Behavioural Data Science Week
This week's cover includes a fragment of an image by Houcine Ncib

Issue 33

April 24, 2025

Editorial Note

Happy Thursday, everyone—and thank you for your patience as I’ve been away on annual leave and reflecting on a topic that hits very close to home: toxic leadership and the possibility that AI might, in certain cases, be less damaging than some of the human bosses we have endured.

We have all heard it: “People don’t quit jobs—they quit bosses.” But what if you couldn’t quit? What if the behaviour from the top—hostile, erratic, manipulative—began to bleed downward, infecting the entire team? What if the problem wasn’t just one leader, but a system that rewards toxicity masked as competence or some other leadership virtue? One international study has reignited this question for me. It suggests something devastatingly simple: bad behaviour is contagious. When managers humiliate, micromanage, or abuse their position, their teams learn to do the same. Culture collapses. Burnout explodes. And at some point, you stop asking “who started this?” and start wondering: is there anyone left who can lead well?

So here is a provocation for this week: if we can’t count on human bosses to protect psychological safety, could a machine do better? Could AI lead more fairly, more consistently, more justly—if not more compassionately?

Let’s investigate.

Leave a 🌟 or “10” if this resonates—especially if you’ve ever worked under a boss who made you dread Mondays.

Yours in discovery,

Ganna

Image credit: Kelly Sikkema

Horrible Bosses, Helpful Machines?

Remember Horrible Bosses? The film gave us a comically exaggerated version of workplace tyranny—managers so outlandish in their cruelty that the only logical response was laughter (and maybe a revenge fantasy or two). Watching it, you would think no real leader could possibly behave that way. And yet, talk to enough people across industries and continents, and the stories begin to rhyme: erratic feedback, public humiliation disguised as “motivation,” burnout framed as personal weakness.

I have to say that I have been incredibly fortunate. Over the course of my career, I have worked under some of the most thoughtful, compassionate, and intellectually generous leaders one could hope for. The kind who listened more than they spoke. Who gave credit easily and criticism with care. Who made work feel not just like a place to perform, but a place to grow.

But not everyone is so lucky—and it raises a bigger question: if we struggle to consistently create humane leadership cultures, could AI ever be part of the solution?

Round One: Charisma vs. Consistency

There is something electrifying about a great boss. They walk into the room and the atmosphere shifts—not out of fear, but anticipation. These are the leaders who remember the details, who sense discomfort before it becomes disengagement, who deliver tough feedback in a way that lifts rather than flattens. At their best, human bosses inspire because they are present—not just physically, but emotionally, intellectually, and ethically. They know when to back off and when to lean in. Their charisma isn’t performative; it is relational.

But this same presence—the spark that can light a team on fire with motivation—can also burn people. Emotional intelligence, in the wrong hands, becomes a manipulative tool. The charismatic boss who once lifted you might later gaslight you. The leader who once made you feel seen might suddenly turn distant or volatile. And the very human ability to adapt can quickly become inconsistency: one set of rules for you on Monday, a different set for someone else on Tuesday.

Now imagine AI as the counterpoint. No charisma. No personal warmth. But also, no mood swings. No favouritism. An AI leader doesn’t hold grudges. It doesn’t come into work frustrated from a difficult commute or distracted by an inbox avalanche. It shows up the same way every day—neutral, structured, and (ideally) consistent. It distributes attention evenly. It evaluates performance dispassionately. In contexts where emotional volatility causes harm—such as high-stakes financial trading or emergency dispatch teams—this kind of consistency might not just be helpful, it might be essential.

But that very neutrality has a cost. It cannot pull you out of a slump with a well-timed “I believe in you.” It can’t pause a meeting to say, “You don’t seem like yourself today. Want to talk?” When an AI detects a dip in your productivity, it flags it as an anomaly. When a human detects it, they might say, “Let’s get coffee.”

So what do we value more: the warmth of human intuition, even when it risks inconsistency? Or the comfort of structure, even when it comes at the expense of empathy?

Image credit: Marek Piwnicki

Round Two: Fairness vs. Favouritism

Fairness is one of the most cited virtues in workplace leadership—and one of the most misunderstood. Ask ten employees to define what a “fair” boss looks like, and you’ll get ten different answers: some want equal treatment, others equitable recognition, still others consistency in decision-making regardless of circumstance. But beneath those variations lies a common longing: we want to feel that the rules apply to everyone, and that they apply with integrity.

Human bosses—especially those who lead with empathy—often struggle here. The best leaders try to tailor their approach to individual needs. But this tailoring can slip into partiality, particularly when shaped by unconscious biases. A manager may give a high-visibility assignment to someone they instinctively “click” with, not because of merit, but because of comfort. They may mentor a young colleague who reminds them of their younger self—while overlooking someone with equal potential but a different background. Research on affinity bias and homophily has shown this time and again: when left unchecked, even well-meaning humans replicate exclusionary patterns without realising it.

Now bring in AI, and for a moment entertain the idea that this is AI “done right”. A well-designed AI system is indifferent to accents, alma maters, or after-hours camaraderie. It scores performance metrics, not personalities. It doesn’t confuse likability with competence. It doesn’t give the benefit of the doubt to those it intuitively trusts because—put simply—it doesn’t “trust” anyone. It just analyses. And in doing so, AI can be a powerful counterbalance to unconscious bias. It can audit hiring pipelines, flag discrepancies in feedback, and highlight employees who are delivering results but flying under the radar. It can give a voice to those who might otherwise be overlooked, not by outshining others, but by letting their data speak.
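
To make that concrete, here is a minimal sketch of what such an “under the radar” check could look like. Everything in it is an assumption for illustration: the column names (output_score, recognition_count), the quantile thresholds, and the data are invented, not taken from any real HR system.

import pandas as pd

# Hypothetical performance snapshot; names, numbers, and thresholds are invented.
df = pd.DataFrame({
    "employee": ["Ana", "Ben", "Chloe", "Dev", "Eli"],
    "output_score": [88, 91, 62, 85, 70],      # delivery metric (0-100)
    "recognition_count": [7, 1, 3, 0, 5],      # shout-outs, awards, mentions
})

# "Under the radar": strong results, little recognition.
high_output = df["output_score"] >= df["output_score"].quantile(0.6)
low_recognition = df["recognition_count"] <= df["recognition_count"].quantile(0.4)

under_the_radar = df[high_output & low_recognition]
print(under_the_radar)  # here: Ben, who delivers well but is rarely recognised

The point is not the ten lines of pandas; it is that the signal comes from recorded outcomes rather than from who a manager happens to notice.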

But this imagined fairness can easily become performative if the system’s inputs are flawed. If biased human decisions are what the AI learns from, then the system won’t remove bias—it will automate it. Worse, it will wrap that bias in a veneer of objectivity. Decisions that once felt personal now feel final. There is no arguing with a system that believes its patterns are truth.
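
That “bias in, bias out” mechanism is easy to demonstrate on toy data. In the sketch below, every value is fabricated: promotion labels that historically favoured one group at identical merit. A model trained on those labels learns the proxy attribute and then scores two otherwise-identical candidates differently. This illustrates the failure mode, not any particular real system.

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Fabricated history: equal merit, but promotions favoured group A.
df = pd.DataFrame({
    "merit":    [70, 70, 80, 80, 90, 90, 60, 60],
    "group_a":  [1,  0,  1,  0,  1,  0,  1,  0],   # proxy attribute
    "promoted": [1,  0,  1,  0,  1,  1,  1,  0],   # biased historical labels
})

model = LogisticRegression().fit(df[["merit", "group_a"]], df["promoted"])

# Two candidates identical on merit, differing only on the proxy attribute.
candidates = pd.DataFrame({"merit": [75, 75], "group_a": [1, 0]})
print(model.predict_proba(candidates)[:, 1])  # the model reproduces the bias

The “objective” score simply repeats the pattern in the labels, now with the authority of a number attached.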

So which would you choose: a fallible human leader who might—on a good day—override their bias for your sake? Or a machine that will never notice you unless your data conforms?

Image credit: Michiel Annaert

Round Three: Burnout vs. Bandwidth

There’s a particular kind of exhaustion that comes not from long hours, but from prolonged exposure to dissonance—when your values are in conflict with your reality, when your effort is invisible, when you are praised for being “resilient” while slowly disappearing. That’s burnout. And in workplaces shaped by toxic leadership, it isn’t rare. It’s routine. A recent study in Frontiers in Psychology captured this quite elegantly (at least, in my humble opinion): more than half of participants who reported abusive leadership also reported emotional exhaustion. The logic is tragically simple—when your boss creates fear instead of focus, defensiveness instead of development, exhaustion is not a malfunction. It is the system working as designed.

Now, human leaders—at their best—can break that cycle. They can notice your fatigue before you speak it. They can offer reprieve, validation, or even just silence. They know when to say, “Let’s cancel that call,” or, “Go home early, I’ve got this.” They lead not just from the head, but from the gut.

But human leaders burn out too. And when they do, they often become the very thing they once protected their teams from. Compassion fatigue sets in. Perspective shrinks. The boss who once championed well-being becomes the one saying, “We all have to push through.”

This is where AI, curiously, shines. It doesn’t burn out. Its memory doesn’t erode under pressure. It doesn’t forget to check in just because its own boss is breathing down its neck. An AI system can monitor your workload, notice changes in your engagement patterns, and gently nudge your manager with insights like: “Task volume has exceeded historical baselines for three consecutive weeks.” It doesn’t panic. It doesn’t blame. It just signals.
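
A rule like the quoted nudge is straightforward to express in code. Below is a minimal sketch: the weekly task counts are invented, and the four-week rolling baseline and the 20% threshold are arbitrary choices made for illustration.

import pandas as pd

# Invented weekly task counts for one employee.
weekly_tasks = pd.Series(
    [20, 22, 19, 21, 20, 23, 30, 34, 36, 38],
    index=pd.period_range("2025-01-06", periods=10, freq="W"),
)

# Historical baseline: rolling mean of the previous four weeks (shifted so
# the current week never informs its own baseline).
baseline = weekly_tasks.shift(1).rolling(window=4).mean()
above = weekly_tasks > baseline * 1.2  # 20% over baseline, an arbitrary threshold

# Nudge only after three consecutive weeks above baseline.
streak = above.astype(int).rolling(window=3).sum() == 3
if streak.any():
    week = streak[streak].index[0]
    print(f"Nudge: task volume has exceeded the historical baseline "
          f"for three consecutive weeks (as of week {week}).")

Notice what the sketch does and does not do: it signals a pattern, and it stops there. Everything after the nudge still belongs to a human.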

Yet it’s precisely that dispassionate logic that sometimes falls short. Because while AI might know when you’re burning out, it doesn’t know why. It doesn’t know that your partner is in hospital. That your child has stopped speaking. That you’re still showing up despite everything. And it definitely doesn’t know how to say, “That must be hard. I’m here.”

So we face a paradox: it would seem that AI can detect the signs of burnout sooner, but only humans can sit with the story behind them.

Image credit: Raul Gomez

Round Four: Vision vs. Calculation

Every great workplace transformation begins with a leap of imagination. A bold pivot. A belief in something that doesn’t yet exist. The best human leaders are visionaries—not because they always know what will work, but because they dare to ask what could. They rally people around missions. They embrace ambiguity. They get it wrong, but they learn loudly.

This kind of vision has changed the world. It has also, occasionally, ruined companies.

Because inspiration can slip into illusion. Charisma into hubris. There is a fine line between believing in the improbable and ignoring the impossible. Human leaders sometimes pursue vanity projects, fall in love with the sound of their own pitch, or cling to sunk costs long after logic says “walk away.” When the compass is emotional, the map can get blurry.

Enter AI. It is not poetic. It doesn’t dream in metaphors. But it sees what is. It models scenarios across millions of data points. It shows you what might happen, not what you wish would. It flags risks before they become crises. It spots trends no human could track in real time. It doesn’t get distracted by office politics or innovation theatre.

But AI can only work with what already exists. It can’t see around corners. It can’t say, “Let’s build something that’s never been done before.” It will never invent the moon landing—or a sustainable business model—out of sheer belief.

The question, then, is not which is better—but which is better when. Do you want boldness when the road is stable? Or do you want stability when the path ahead is unclear?

Maybe the wisest organisations are those that alternate between the visionary and the validator (or at least entertain the possibility).

Image credit: Alex Paurariu

Final Round: Trust

And here we arrive at the deepest fault line—the one that runs beneath every team, every decision, every system: trust.

We trust human bosses, even flawed ones, because we see them try. Because we see them fail. Because, sometimes, we see them own their failures. Trust in human leadership is relational—it grows through apology, humour, consistency, vulnerability. We trust people not because they are perfect, but because they are accountable.

But we also fear them (to be fair, “fear” is probably not the right word, but it is the closest one I can find, and I am sure you know what I mean). Fear their moods. Their unspoken assumptions. Their unshared intentions. The way their perception becomes reality, often without explanation. And it is this unpredictability that erodes psychological safety.

With AI, trust takes a different shape. It is less about intention and more about transparency. We don’t trust AI because we feel emotionally safe—we trust it because we can trace the logic (or at least, we should be able to). We trust it because, unlike humans, it doesn’t forget. It doesn’t retaliate. It doesn’t lie to protect its ego.

But AI is also, paradoxically, opaque. Most people don’t understand how it works. And when decisions are made—about hiring, promotion, workload—people want not just the right answer, but a human answer. One they can challenge. One they can talk back to.

So perhaps the most dangerous boss is not the human or the AI—but the system that erodes trust in both.

What would it take to build a leadership model where trust isn’t a gamble—but a shared responsibility?

Takeaways: Not Who’s Better, but What We’re Willing to Change

In the end, this isn’t a debate between humans and machines. It’s a mirror. If AI even appears to be a more trustworthy leader than the average human manager, then we have a bigger problem than technology. We have a cultural deficit in leadership. We don’t need to surrender the office to the algorithm. But we do need to raise our standards. What if we took the best of both—human empathy and AI consistency—and finally built leadership systems worthy of the people they serve?

Would you report to a well-trained AI? Or do you still believe in human leadership, if only we did it better?

Let me know what you think. And if you’ve had a horrible boss—or a quietly brilliant one—I’d love to hear your story!

Image credit: Nick Fewings

Research Highlights

These are studies combining behavioural and data science components that caught my eye this week. Note that inclusion in this list does not constitute an endorsement or a recommendation; they are simply pieces I found interesting to read.

Impact of abusive leader behavior on employee job insecurity: A mediating roles of emotional exhaustion and abusive peer behavior

Based on the social exchange theory, the present study aimed to investigate the association between abusive leader behavior and job insecurity while considering the serial intervention of abusive peer behavior and emotional exhaustion. Abusive leader behavior triggers abusive peer behaviors, emotional exhaustion, and job insecurity. Results from the data of 323 final responses indicated support for all the hypothesized relationships. Moreover, the findings also reported sequential mediation of abusive peer behavior and emotional exhaustion in the association between abusive leader behavior and job insecurity. The results indicate that mistreatment by an immediate boss can encourage peers to engage in similar unethical behaviors, leading to employees feeling emotionally exhausted, which ultimately results in job insecurity concerns. The study hopes that the findings will help practitioners dedicate more efforts to curtailing abusive behaviors that lead to several unintended consequences at work.

Leadership and fairness: The state of the art

Research in leadership effectiveness has paid less attention to the role of leader fairness than probably it should have. More recently, this has started to change. To capture this development, we review the empirical literature in leadership and fairness to define the field of leadership and fairness, to assess the state of the art, and to identify a research agenda for future efforts in the field. The review shows that leader distributive, procedural, and especially interactional fairness are positively associated with criteria of leadership effectiveness. More scarce and scattered evidence also suggests that fairness considerations help explain the effectiveness of other aspects of leadership, and that leader fairness and other aspects of leadership, or the leadership context, may interact in predicting leadership effectiveness. We conclude that future research should especially focus on interaction effects of leader fairness and other aspects of leadership, and on the processes mediating these effects.

When Your Boss is an Algorithm: The Effect of Algorithmic Management on Worker Performance

Artificial intelligence is becoming an integral part of the workplace and is increasingly used for managerial tasks (e.g., evaluating candidates, allocating assignments, assessing productivity). How do workers respond to algorithmic management? How does it affect their attitude and performance at work? To study these under-explored questions, we conducted a field experiment in an online labor marketplace where we randomly assigned 1,500 workers to either a human or algorithmic manager treatment and varied the type of interaction (positive vs. negative). Our results indicate that working under algorithmic rather than human management has substantial consequences on the way workers approach and carry out the job. Specifically, workers who receive positive feedback from human managers put significantly more care and effort into their tasks and perform more accurately, compared to workers who receive identical feedback from algorithmic management. Our findings underscore the important role of recognition by humans in motivating workers.

Image credit: Sander Sammy

Events and Opportunities

You may find the following events and opportunities of interest. Note that inclusion in this list does not constitute an endorsement or a recommendation.

Events:

Vacancies:

Stratagem Research, UK

PolyAI, UK

Microsoft, USA

Google, USA

WWF-Australia, Australia

Image credit: Heather McKean

Your Feedback

Thank you for reading this week’s edition of Behavioural Data Science Week. Share your thoughts—or simply drop a "10" if this edition resonated with you.

Rean Da Costa

Behavioural + Data Scientist | AI / ML Engineer | AI Innovation at Bank of England | Delivered 80% Improvement in data-driven decision-making among FTSE 100 companies utilising behaviour science | Ex national advisor

Excellent insights. This reminds me of Nathan Brooks' 2016 study on psychopathy in business. They found that roughly 1 in 5 corporate executives display psychopathic traits, which is similar to prison populations but of course much higher than the general public (1 in 100). There's an interesting saying I came across that 'If you are a psychopath and born poor, you go to jail. However, if you are a psychopath and born into wealth, you go to business school!' :). Trust is also an interesting point. We readily place our trust in human leaders over AI; however, the study also pointed out a blind spot during recruitment, i.e., we prioritise skills over character during the hiring process, and hence it's easy to end up with the bad-behaviour contagion that spreads across the organisation. It would be interesting to see what a trusted combination of AI and human capability would look like in practice.
