"Overall, the performance of our 'synthetic sample' is too poor to be useful for all of our research questions...Further, we have demonstrated synthetic samples generate such high errors at the subgroup level that we do not trust them at all to represent key groups in the population." Ell. Oh. Ell.
https://lnkd.in/ebajB7Hq
LLMs 👏 do 👏 not 👏 think 👏 like 👏 people. Political folks, remember this the next time you're pitched on "simulated" focus-group or polling response tools.
You aren't getting an answer to your question. You're getting a system that looks for old focus group clips it thinks are *similar*, then plays back the recording it thinks fits best.
https://lnkd.in/ewuqUWey
Why do folks tend to put the 'TL;DR' at the end of what they've written?
I can't remember where I saw this, a film, a show, but it seemed good advice at the time:
✅ Tell them what you're gonna tell them.
✅ Tell them.
✅ Tell them what you told them.
I think it was a legal show, and it was advice from an experienced old hand to someone making their first case in court. (It's also been attributed to many great minds, including Aristotle.)
Since then, I've encountered it a few times: as a presentation tool, as a thesis-preparation tool, and in countless other settings.
Thing is, the idea that you can bury the summary of your ramblings in the middle, or at the end of them (as I've just done), is ridiculous, and so pre-AI.
The internet, and its readers, have moved on (user intent is all-powerful now that LLMs and GenAI serve as 'search engines'). Move with it.
𝗕𝗶𝗮𝘀 𝗶𝗻 𝗔𝗜: 𝗪𝗵𝘆 𝗜𝘁’𝘀 𝗠𝗼𝗿𝗲 𝗖𝗼𝗺𝗽𝗹𝗶𝗰𝗮𝘁𝗲𝗱 𝗧𝗵𝗮𝗻 𝗜𝘁 𝗟𝗼𝗼𝗸𝘀 (𝗮𝗻𝗱 𝗛𝗼𝘄 𝗮 𝗠𝗶𝘅𝘁𝘂𝗿𝗲 𝗼𝗳 𝗩𝗼𝗶𝗰𝗲𝘀 𝗖𝗼𝘂𝗹𝗱 𝗛𝗲𝗹𝗽)
When people talk about “bias in AI,” it often gets framed like a bug: find it, patch it, done. But bias doesn’t work that way. It’s layered, contextual, and often subjective.
Some phrases are clearly sensitive. Others are “dog whistles”: ordinary-looking language that carries hidden meaning in certain contexts. Researchers like Kruk et al. (2024) have cataloged thousands of examples, while Sasse et al. (2024) showed how new dog whistles emerge in online spaces faster than lexicons can keep up. Add fuzzy semantic matching — where embedding models collapse distinctions between “close enough” queries — and the problem gets trickier still.
The harder question is: what do we even mean by bias? For one person, objectivity means sticking to bare facts. For another, it means balancing perspectives. For someone else, it might mean optimizing for creativity or contrarian analysis. When we say we want AI to be “unbiased,” we’re usually asking it to reflect our own preferences.
Classic work like Caliskan, Bryson & Narayanan’s WEAT study (2017) showed that even broad word embeddings replicate human stereotypes. Ferrara’s 2023 survey catalogs how bias arises not just from data but also from design and deployment. Even model architectures matter: Chung et al. (2024) found that gating in Mixture of Experts (MoE) models embeds its own biases.
So maybe the real challenge isn’t eliminating bias — it’s making it visible and navigable.
That’s the motivation behind a project I’ve been working on: Mixture of Voices. Instead of pretending one model can be the “neutral” voice, it routes across multiple AI systems (Claude, ChatGPT, Grok, DeepSeek, etc.) and explains why decisions are made. If a safety rule triggers, you see it. If a model is chosen for math, it tells you. The system surfaces trade-offs (safety vs. performance, precision vs. recall) and lets users steer according to their own definition of objectivity.
Bias isn’t a bug to squash. It’s a set of editorial decisions that should be transparent and user-configurable.
So I’ll leave you with a question: would you rather use an AI that claims to be neutral, or one that admits its biases and gives you the steering wheel?
Want to see what I have been up to regarding optimizing AI model selection (and helping address bias and transparency along the way)? See my open source Mixture of Voices project at https://lnkd.in/e2j7cyJn
A very quick demo can be seen at: https://lnkd.in/eY7z73rN
#ai #artificialintelligence #openai #claude #grok #groq #deepseek #opensource
Interesting read. Agree that everyone’s crowding upstream: NVIDIA, foundation models, the obvious infrastructure plays. Feels late, upside is capped, and will likely be a race to the bottom for many players who don’t get out early.
The earlier opportunity? Fishing downstream (or at least anywhere other than upstream). Integrating AI into every workflow, business and personal. That’s going to take years of iteration and create wave after wave of new value.
I'm personally betting on value creation for marketers, now that ideas are easier than ever to build into personalized customer experiences.
One nit with the article: most of it focuses on the value that investors can capture. That seems narrow. AI can (and should) create broad value for the world. The best companies will do both, but even if some just make work and life better without minting billionaires, that’s still a win.
This article is funny, sharp, and slightly painful.
But it might make you rethink what “winning” looks like.
Smart read: https://lnkd.in/gDHNNbvf
🔥 Tired of running out of premium AI prompts?
I’ve built something new to make working with images and AI a whole lot easier → https://answermyimage.com/
With AnswerMyImage, you can:
✅ Upload any image
✅ Instantly scan & extract the text
✅ Send it directly to ChatGPT, search with Google, or copy to your clipboard — all in one simple motion.
No more typing everything out manually. Just upload, extract, and go.
I’d love for you to give it a try and let me know your thoughts!
👉 https://answermyimage.com/
#AI #Productivity #WebsiteLaunch #AnswerMyImage
✨ Can I say something?
I don’t really like listening to audio generated by NotebookLM 🎧🤖 — and that’s despite being an early adopter of the tool.
I would listen if I knew exactly what the input was 📝➡️🔊. But I also worry about hallucinations 😬.
For example, if I land on a blog post and see AI audio at the top, I usually ignore it 🚫👂 because:
👉 I don’t know the input
👉 Potential hallucinations
So, if this is something you want to use, I’d recommend writing a quick summary 📌 of the input documents and clarifying whether the audio was human-reviewed 👩💻✅.
Just a thought 💡.
Okay, here's a video script for "The Psychology of Impulse Buying: Why We Do It and How to Stop," designed with elements that would work well for an AI video generation tool like InVideo (or similar platforms that combine stock footage, text overlays, and voiceovers).
Video Script: The Psychology of Impulse Buying
Target Audience: Individuals struggling with unplanned spending, those looking to improve their financial habits.
Tone: Empathetic, informative, empowering.
Hi Humaniser! 👋
You’ve probably seen the word humaniZer used in tech — a way to make AI sound more… human.
But I think it’s time to give that word back to the people.
To the ones who challenge what no longer serves us.
Who bring humanity back into work.
Who know performance and wellbeing aren’t enemies — they’re teammates.
Not bots.
Not algorithms.
Just humans — real ones — doing real work, with real heart.
💛 Are YOU a Humaniser?
I talk about what it really means (and why it matters) in my latest video on YouTube.
🙃 Confession: I hate troubleshooting.
Some folks see it as a fun puzzle. Not me. I’d rather skip straight to the part where things work.
But right now, I’m building an AI integration in Storyline. I hit publish, crossed my fingers… nope, didn’t work. Classic.
As much as I dislike troubleshooting, I’ve picked up a few tricks that help me push through (and maybe they’ll help you too):
🔧 My Troubleshooting Tips for the “Not-a-Puzzle” People
1️⃣ Check your trigger order. If you’ve got a bunch, add a small delay (like starting at .01 seconds). (Stolen from Olivia Lucy, M.A.)
2️⃣ Test in different contexts...Preview vs. Published, or even different browsers.
3️⃣ Isolate variables. Change one thing at a time so you know what’s causing the issue.
4️⃣ Copy the slide/layer. Weirdly enough, sometimes a fresh copy works.
5️⃣ Make the invisible visible. If something’s firing behind the scenes, find a way to show it on screen. (Stolen from a session with Noah Mitchell)
6️⃣ Fresh eyes. Step away or grab a colleague... trenches = tunnel vision.
I still don’t like troubleshooting, but these keep me from banging my head against the desk (most of the time).
What’s your go-to troubleshooting trick? 👀
#InstructionalDesign #Storyline #Elearning #AIinLearning #Troubleshooting
FOBO (Fear of Becoming Obsolete) is a real thing. Remember that AI can never replace your story because we are all on individual journeys. It's your life story that gets you hired and makes people want to be with you. A resume (AI generated or otherwise) can only get you an interview and make you a person of interest to be investigated. Embrace AI to enhance you because it simply cannot replace you.
https://lnkd.in/gTHuFNcP
Dinesh Ramasvamy MBA called this an “interesting read” and it is. I read it closely and I ran it through Verity (my AI co-author) to sharpen some points. Here’s where I see real gaps worth calling out:
1. Agentic AI at scale
• Article claim: Deployment is “increasing across fields like software engineering and customer service,” with the potential for a “superhuman workforce.”
• Reality: No concrete enterprise-wide examples are provided. Outside of pilots and heavily supervised workflows, we’re nowhere near true scale. The article’s own wording (“deployment is increasing”) is descriptive, not proof of scaled, autonomous production systems.
2. Who crowns the “experts”?
• Article claim: The authors say they convened “an international panel of AI experts.”
• Reality: Nowhere do they operationalize what “expert” means. Are these people building and running agentic systems in real production, or advising and debating hypotheticals? The phrase “panel of AI experts” is presented as authority without methodological detail.
3. AI theater vs. AI reality
• Article claim: A majority argue management must be reimagined for a rapidly changing “superhuman” workforce.
• Reality: Much of what’s labeled “agentic AI” today reads as glorified workflow automation: useful, yes, but not yet a superhuman workforce. Even the article contains dissenting voices on this point, which undercuts a one-size-fits-all alarmism.
Bottom line
I’ve worked in tech my whole career. I’m excited about AI’s potential, and I’m also skeptical of hype dressed as progress. If we want to manage this responsibly, we need less buzzword inflation and more sober evidence: definitions, concrete case studies, and clear methodology. I discussed this piece with Verity to make sure my critique was tight and grounded. That’s how I like to use AI: as a force-multiplier for careful analysis, not a substitute for judgment.
Grace and Peace,
—Marc
Boston Consulting Group (BCG) · MIT Sloan Management Review
#AgenticAI #ResponsibleAI #Leadership