Scaling Dysfunction: What Happens When AI Mirrors Your Worst Habits 

Your culture isn’t invisible anymore. It’s becoming executable code.  

What if your replica was doing exactly what it was told, just not what you intended? This is the question too few leadership teams are asking as they race to integrate AI. The corporate narrative is filled with urgency: "Deploy AI or fall behind." But here’s the deeper, quieter truth: AI won’t just accelerate your operations. It will accelerate your organizational logic, whether that logic is aligned or broken.

AI doesn’t transform your culture. It scales it. And that’s the risk.  

Because if your company is already plagued by whispered dissent, closed-door decisions, and performative collaboration, AI will encode those patterns and execute them with surgical precision. You won’t see resistance in the meeting room. You’ll see it in your data flows. In opaque recommendations. In subtle biases that quietly tip the scales, because the system learned from you.  

Dr. Ayanna Howard, Dean at Ohio State, researches human-AI interaction and how bias and ethics shape intelligent systems. Her warning: bias in AI is not just about data; it’s about design, intent, and oversight. In other words, when AI mirrors human systems, it doesn’t just reflect numbers; it reflects values, blind spots, and behavioral defaults. Her research shows that people often trust AI not because it’s right, but because it feels right, especially when it confirms their existing beliefs. That’s exactly why organizational culture matters more than ever. If your leadership tolerates silence, shortcuts, or bias, your AI won’t challenge that. It will double down on it. Quietly, invisibly, and at scale.

Culture failure doesn’t whisper anymore. It roars.  

We’ve seen it on every stage:  

  • Wells Fargo: where internal warnings were ignored until $3 billion in fines made it clear. They didn’t have a fraud problem. They had a silence problem.  

  • Uber: where hypergrowth buried harassment claims until the culture imploded and took $20 billion in valuation with it.  

  • Boeing: where speed trumped safety, and 346 people died because engineers were overruled, and doubts were dismissed.  

  • Wirecard: where culture protected power instead of truth, and €1.9 billion evaporated. 

  • Tesla: where productivity broke records, but so did employee burnout, lawsuits, and turnover. 

Exhibit 1: Impact of Toxic Culture

Build before you scale  

Can AI improve culture? Sure! If the culture is ready for it. Studies from MIT Sloan and BCG confirm it: in high-trust, feedback-rich teams, AI adoption isn’t just smoother; it’s smarter. Better outputs. Faster learning. Stronger decisions. In these environments, AI becomes a mirror, a coach, even a multiplier of reflection. 

But here’s the catch: only after you’ve done the human work. 

You can’t expect AI to govern behavior if your team is still afraid to speak up in a meeting. A Culture Operating System (CultureOS) is not a feel-good concept. It’s an infrastructure, just like your financial system. As Lindsay McGregor, co-author of Primed to Perform, has shown, culture is measurable, manageable, and directly tied to performance. The most adaptive organizations don’t leave it to chance. They design it, measure it, and own it.  

Here’s what to build: five protocols to run your human system like your most critical infrastructure:  

Step 1. Codify the source code  

“Collaboration” isn’t valuable unless you can see it. Translate your values into observable, trainable behaviors. That means people know what “good” looks like in practice.  

Install:  

  • Behavioral scorecards in performance reviews (e.g., how well someone shares knowledge, not just what they deliver)  

  • Values-based interview prompts (e.g., “Tell me about a time you disagreed and still helped your colleague succeed.”)  

  • Embedded behaviors in rituals (e.g., shout-outs for cross-functional wins in stand-ups)  

Best Practice:  

Netflix famously links values like “candor” and “judgment” to specific behaviors in its performance framework: "You share information openly, proactively, and respectfully." These show up in hiring, firing, and promotion decisions.  

Step 2. Make feedback a system, not a personality trait  

Feedback shouldn’t depend on courage. It should be a normal, recurring structure designed to surface dissent early and make improvements safe.  

Install:  

  • Weekly feedback loops (like Friday learning rounds)  

  • Anonymous feedback tools (Pulse, Leapsome, Officevibe, Culture Readiness Check)  

  • Dissent dashboards that track how often feedback is raised, acted on, and closed  

  • Retrospectives that include behavior-based KPIs (e.g., “How well did we live our value of transparency this sprint?”)  

Best Practice:  

Bridgewater Associates runs radical transparency through structured feedback tools and recorded meetings. People rate each other in real time, and feedback is built into daily operations.  

Step 3. Build human-AI decision protocols  

AI shouldn’t be a black box. It should operate under explicit decision logic that’s visible to teams.  

Install:  

  • AI-Human Decision Matrices (e.g., customer support: AI auto-responds to tier 1, human escalation for tone or nuance)  

  • Escalation paths for ethical flags (e.g., bias spotted in hiring recommendations)  

  • Audit trails for every high-stakes AI-influenced decision (e.g., budget forecasts, hiring, terminations)  
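To make the protocol concrete, here is a minimal sketch of what an AI-human decision matrix with escalation paths and audit records could look like in code. The tier names, confidence threshold, and flag labels are illustrative assumptions, not a reference implementation:

```python
from dataclasses import dataclass, field

# Hypothetical customer-support routing rules. The threshold (0.9) and the
# flag labels ("tone", "ethics") are assumptions for illustration only.

@dataclass
class Ticket:
    tier: int                 # 1 = routine request, 2+ = complex
    confidence: float         # model's confidence in its suggested reply
    flags: list = field(default_factory=list)  # e.g. ["tone"], ["ethics"]

def route(ticket: Ticket) -> str:
    """Return who decides: the AI, a human agent, or an ethics reviewer."""
    if "ethics" in ticket.flags:
        return "escalate_to_ethics_review"  # explicit path for ethical flags
    if ticket.tier > 1 or "tone" in ticket.flags:
        return "human_agent"                # nuance or tone needs a person
    if ticket.confidence < 0.9:
        return "human_agent"                # low confidence: don't auto-send
    return "ai_auto_respond"                # routine, high-confidence tier 1

def audit_record(ticket: Ticket) -> dict:
    """Every AI-influenced routing decision leaves a traceable record."""
    return {"tier": ticket.tier, "confidence": ticket.confidence,
            "flags": ticket.flags, "decision": route(ticket)}

print(route(Ticket(tier=1, confidence=0.95)))                   # ai_auto_respond
print(route(Ticket(tier=1, confidence=0.95, flags=["tone"])))   # human_agent
print(route(Ticket(tier=2, confidence=0.99, flags=["ethics"]))) # escalate_to_ethics_review
```

The point is not the code itself but that the decision logic is explicit, versionable, and visible to the team rather than buried in a black box.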

Best Practice:  

Salesforce’s Office of Ethical and Humane Use of Technology created protocols for when humans must step in, including reviewing hiring algorithms and marketing AI output for equity issues.

Step 4. Run cultural debugging like cybersecurity

You audit your finances. You stress-test your systems. But culture? Most companies still wait until people burn out or resign. That’s reactive. And expensive. Culture needs its own audit loop. One that surfaces dissonance before it calcifies.  

Install:  

  • Values-to-behavior scans (e.g., values show up in daily actions vs. just on posters)  

  • Dissent and silence metrics (e.g., how often people challenge ideas or hold back)  

  • Cultural heatmaps by team (e.g., who reinforces culture vs. who quietly drifts) 
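A dissent metric does not need a heavyweight platform to start. As a minimal sketch, assuming a simple log of feedback events with illustrative field names, a per-team "action rate" could be computed like this:

```python
from collections import defaultdict

# Hypothetical feedback log. Team names and event fields are illustrative
# assumptions; the idea is to count feedback raised vs. acted on per team.
events = [
    {"team": "payments", "raised": True, "acted_on": True,  "closed": True},
    {"team": "payments", "raised": True, "acted_on": False, "closed": False},
    {"team": "platform", "raised": True, "acted_on": True,  "closed": False},
]

def dissent_metrics(events):
    """Return each team's action rate: share of raised feedback acted on."""
    stats = defaultdict(lambda: {"raised": 0, "acted_on": 0, "closed": 0})
    for e in events:
        s = stats[e["team"]]
        s["raised"] += int(e["raised"])
        s["acted_on"] += int(e["acted_on"])
        s["closed"] += int(e["closed"])
    return {team: round(s["acted_on"] / s["raised"], 2)
            for team, s in stats.items()}

print(dissent_metrics(events))  # {'payments': 0.5, 'platform': 1.0}
```

A team whose action rate trends toward zero is a red flag worth the same urgency as a failed security audit: feedback is being raised and then ignored.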

Best Practice:  

Allianz. The insurance giant runs regular culture diagnostics across global units, capturing feedback on leadership trust, decision clarity, and ethical pressure. Their “Allianz Engagement Survey” feeds into business unit planning and leadership KPIs. If a red flag shows up? Leaders are expected to act. Immediately.  

Step 5. Deploy leaders like models

Leaders don’t just set direction. They encode norms. How they handle disagreement, make trade-offs, or admit fault will be mimicked by teams and systems.

Install:  

  • Decision journals (leaders publicly write their assumptions and learnings)  

  • Trade-off visibility in meetings (“Here’s what we’re prioritizing and what we’re giving up.”)  

  • Walkthroughs of reasoning aloud to teams (so logic becomes shared, not just followed)  

Best Practice:  

Amazon’s “Narratives over slides” policy forces leaders to document the thinking behind decisions, including risks and ethical concerns. It creates institutional memory and accountability.

Exhibit 2: The Cultural Change Process

The Real Question: What are the worst behaviors you’re scaling?

Stop treating culture like something optional, personal, or soft. That mindset belongs in a dusty HR manual from the 1970s, not in a modern boardroom. In an AI-first world, culture is your source code. If it’s biased, broken, or misaligned, AI won’t slow you down; it’ll scale the very thing you should’ve fixed first.

Install the CultureOS. Not because it’s nice. Because it’s non-negotiable.  

If this hits home, I want to hear from you:  

→ What culture work are you doing alongside your AI roadmap?  

→ Drop your insights. Tag a leader who needs this.  

→ Share this with your team before your next AI meeting.  

Because culture isn’t soft stuff. This isn’t just a tech strategy. It’s your future workplace: coded, scaled, and locked in.

