"Talk to your users before building anything." I used to treat this advice as gospel, but in the AI age everything is so novel that no one knows what's possible, and therefore no one knows what to ask for. Before ChatGPT, no one was asking for an all-knowing AI; they just bashed their heads on Google until they got what they wanted. Another quote that sums this up: "If we asked people what they wanted, they would've said faster horses." So is the solution just to build? Not exactly. For Check, we wondered why parents and students were paying so much for tuition despite having a PhD-level answering machine at their fingertips. We realised students needed an AI that teaches, not just answers, and one designed for the rigour and specificity demanded by their syllabi. So find a market that's underserved by current solutions, and don't be afraid to build something people didn't know they needed. Sometimes the best products solve problems users couldn't even articulate.
"Building AI: Listen to users, but also innovate"
-
Have you ever glanced over at your neighbor's computer on a flight? I was talking to someone yesterday who did exactly that. They noticed the guy next to them was chatting with AI. But not for code. Not for research. Not for strategy. He was just trying to figure out how to say NO without hurting someone’s feelings. That was it. 2 years ago, when GPT-3.5 dropped, I built a tool called SayItKindly. You’d paste your message, and it would rewrite it to sound more thoughtful. But it flopped. Not because it wasn’t useful. But because it was easier to just ask ChatGPT. Here we are in 2025, and nothing's changed. We’ve got the most powerful models on the planet. And what are people using them for? - How to say no. - How to be more polite. - How to sound less frustrated in an email. It’s easy to laugh at that, but it’s also deeply human. People don’t want to burn bridges. They just want to be understood. They want to be liked. It is good to know that AI is helping people foster better human connections.
-
I thought I knew ChatGPT, until I couldn’t explain it simply. I get great results, but without understanding, I’m leaving value on the table. So I took the 10 most-watched YouTube videos on LLMs and GenAI and fed them into NotebookLM. The video below is one of the fun outputs from that research. It genuinely leveled up how I use these models. Inside the video: - Why an LLM is a “stochastic parrot” (and what stochastic actually means). - How LLMs are trained, in plain English. - The transformer architecture, minus the jargon. - The “finishing school” that turns a raw model into a helpful assistant. - Why an LLM is basically Einstein sitting on your desk. If this helped, please like and comment so I can make more on the topics you care about. What part of LLMs should I unpack next? #AI #GenerativeAI #MachineLearning #PromptEngineering
-
"In 10,000 years of human history, there have been maybe a dozen moments that changed everything. Most happened in the last 100 years. We're living through another one right now. And we're all acting like complete hypocrites about it." 🍄 "An NYU professor just AI-proofed his assignments. You know what happened? 🍑 Students complained the work was 'too hard.' 🍑 One asked for an extension because ChatGPT was down that day. 🍑 Another literally said: 'You're asking me to go from point A to point B, why wouldn't I use a car to get there?' When asked about their largely AI-written work, a student shrugged: 'Everyone is doing it.'" I'm reading this whole thing, shaking my head and laughing, nearly spitting water all over my keyboard... while Claude helped me structure my latest research, ChatGPT cleaned up my article drafts, and I haven't written a conclusion without AI feedback in months. We've all become the students in this story. We just don't want to admit it. 🍤 What makes this funnier (and scarier) is that I spend my days researching AI, writing about its implications, and studying human-AI interaction... while simultaneously being unable to finish a paragraph without asking AI to make it 'flow better.' We've officially become the species that needs AI to complain about AI dependency. P.S. This post was human-written* *With AI assistance, editing, and a grammar check, because I'm exactly who I'm talking about. #AI #NCI #HumanBehavior
-
WRONG! Want to show your kids why they shouldn’t blindly trust AI? Show them this prompt... They learn a little, toss the rest to ChatGPT, and boom, suddenly they’re “experts.” But really, they’ve swapped curiosity for convenience. They stop learning, stop questioning… and never actually get good. AI is wrong a lot. If you don’t know enough to push back, you’re letting a machine make decisions for you and that can backfire, big time. AI was trained on tons of “perfect” clock photos, you know, the classic 10:10 pose because it looks pretty. So when you ask for a clock showing 4:30, it might just make a funny mistake, but also a perfect lesson :) This is the kind of eye-opener kids need: AI is awesome when you have knowledge, but dangerous when you don't. Don’t let AI be the end of curiosity, Stay WIZER!
-
"Like students facing a hard exam question, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty," according to OpenAI. The researchers acknowledged that LLMs will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering. They observed that "Such 'hallucinations' persist even in state-of-the-art systems and undermine trust." My reaction? Well, duh. Anyone who has worked with AI knows it hallucinates. My image prompt for this story was "young student holding a pencil daydreaming while taking a test in an old classroom from the 50s." Midjourney got the pencil right but dressed my 8-year-old like he was off to prom after guessing his way through his math test. And it is not just pictures. Over the weekend, I spent an entire day trying to get ChatGPT to build a pro forma. I thought I could take a shortcut. I could not. The results were so inconsistent that I gave up and built the spreadsheet from scratch. "Unlike human intelligence, it lacks the humility to acknowledge uncertainty," said Neil Shah, VP for research at Counterpoint Technologies. "When unsure, it doesn't defer to deeper research or human oversight. Instead, it often presents estimates as facts." So, can you train AI to stop guessing and admit when it does not know the answer? Based on this research, the answer is no. You still need an adult to grade the test and explain what went wrong. That said, there is a path forward. While public models like ChatGPT can be unreliable, private GPTs built on your organization's own data and guardrails can reduce hallucinations and stay grounded in reality. They do not guess outside what they know. If you want to explore how that works in a secure and meaningful way, let's talk. Contact Michelle Fink at Hudson Technology. #MSP #MSSP #AI #LLM
-
Why do LLMs lie from time to time? Thanks for asking: When researchers test language models, they use benchmarks that work like multiple-choice exams. Get it right? One point. Say "I don't know"? Zero points. The result? Guess wildly and maybe get lucky? You might score! Think about it like a student facing a pop quiz. If leaving an answer blank guarantees failure but guessing gives you a 1-in-365 chance of nailing someone's birthday, what would you do? You'd guess. That's exactly what ChatGPT does. I've taken this answer from #theneuron, a great newsletter you should subscribe to.
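The incentive the post describes is just expected-value arithmetic. Here is a minimal sketch; the one-point scoring rule and the 1-in-365 birthday odds are illustrative assumptions, not any real benchmark's rubric:

```python
# Hypothetical benchmark scoring: 1 point for a correct answer,
# 0 points for a wrong answer OR for saying "I don't know".
p_correct = 1 / 365  # assumed chance of guessing someone's birthday

# Expected score under each policy:
ev_guess = p_correct * 1 + (1 - p_correct) * 0  # small but positive
ev_abstain = 0.0                                 # abstaining always scores 0

# Guessing strictly dominates abstaining under this rubric,
# so a model optimized for the score learns to guess.
assert ev_guess > ev_abstain
```

Under any scoring rule that gives abstention the same zero as a wrong answer, even a tiny chance of a lucky guess makes guessing the rational policy.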
-
My nieces and nephew were at my place this week — two 16-year-olds and a 13-year-old. I asked one question: How do you use ChatGPT? Their answer: it helps sometimes, but it can’t replace thinking. They know over-reliance will leave them behind. Banning AI feels tidy. It also won’t teach a child how to think with tools they’ll use for the rest of their lives. Schools that initially banned ChatGPT have reversed course after deciding the tool must be managed and taught, not hidden (Forbes). There are real reasons to be cautious: recent research from MIT’s Media Lab suggests habitual LLM use for essay writing is associated with weaker neural engagement and lower measures tied to memory and creativity, an early warning about outsourcing thinking as a habit (arXiv). I wrote a short piece on why banning isn’t the answer and what practical steps schools, teachers and parents can adopt instead. Click here to read more: https://lnkd.in/gsrUX9_c What would you tell a teenager about using AI? Comment below — I’ll share the best 3 replies in a follow-up post. — Alexie O’Brien [AOB] #AIinEducation #TeachingWithAI #CriticalThinking #DigitalLiteracy #SchoolLeadership #FutureOfWork #EdTech
-
I love this reflection from Alexie O'Brien GAICD — it highlights the black-and-white approach some schools are still taking in a moment that is, arguably, speckled with color. We won’t stop the rapid AI adoption happening across industries, but we can prepare kids for the working world they’ll enter. 🧡 Pause — understand your purpose for using AI 💚 Align — ensure use connects with school values 💙 Trust — build habits for evaluating and trusting AI output 🩷 Hone — continuously improve your practice Preparing kids begins with us. That means curiosity, a willingness to step outside our comfort zones, and the courage to set intentional, explicit guardrails. 💬 How is your school moving beyond “black and white” thinking when it comes to AI?
-
🤖📚 𝐖𝐡𝐞𝐧 𝐘𝐨𝐮𝐫 𝐂𝐨𝐥𝐥𝐞𝐠𝐞 𝐊𝐢𝐝 𝐁𝐞𝐜𝐨𝐦𝐞𝐬 𝐘𝐨𝐮𝐫 𝐀𝐈 𝐓𝐞𝐚𝐜𝐡𝐞𝐫 My son just schooled me in AI—and I run an AI org. Back in 2023, my college-going son bought a ChatGPT subscription. I rolled my eyes. “Another shiny new toy,” I thought. Fast forward two years: I’m building my healthcare startup, he’s juggling three ventures, and somehow we end up side by side—bonded by AI. I live and breathe AI for work—but watching him use ChatGPT, Claude, Gemini, Cursor, and multi-agent setups like they were second nature? Mind officially blown. He taught me: ✨ Label GPT conversations, and it becomes a memory assistant. ✨ Why multi-agent and Agentic AI setups aren’t geeky experiments—they actually speed things up. ✨ How to stop treating ChatGPT like “Google with a keyboard” and start using it as a collaborator (yes, it can even be bossy). Soon, we were swapping AI hacks like trading cards—his trading experiments, my healthcare prototypes. That’s when it hit me: my son had become my teacher. 𝑆𝑎𝑚 𝐴𝑙𝑡𝑚𝑎𝑛 said no group embraces AI more than college kids—and he’s right. They breathe it, play with it, and make it part of everything (https://lnkd.in/etChrQzu) So here’s to 𝐑𝐨𝐦𝐢𝐫 —thank you for reminding me that learning AI is as much about curiosity and play as it is about scale and systems. Claude may still code like an overconfident intern—but it makes us faster, bolder, and way more creative. For those without a nightly AI crash course from a college kid, check out CrewAI’s multi-agent systems course (https://lnkd.in/eG55t_6r) —it overlaps a lot with what I’m building in my startup.
-
"Maybe the real danger isn’t that AI gives us answers, but that we stop learning how to wrestle with the questions."
Curator, Writer and Coach 🌟 Founder, The School of Experiences 🎙️ Curator, Futures | BLR Hubba 🧲 Content + Community, CCBP at IIM Ahmedabad
My husband is not on any social media platform, hasn’t used ChatGPT in his life and his dream is to live on a beautiful island with landline telephone. We were out for a gathering and in that conversation, our Gen X uncle was mentioning how he is on Perplexity Pro, and how everything you want to know is easily accessible. He was mentioning that in his IIT exams, the open book papers were the most difficult. And my husband said, yes, that’s because all the answers look equally right. And the learning is not in the answer, but in the process of getting to it. And one has to go through the friction of learning to truly realise the deeper insight. True wisdom was never about finding the answer quickly, but about becoming the kind of mind that can hold uncertainty deeply. And that’s what we missed with gpt. There is no friction, there is no self-discovery, there are no ahas. There is a serious dearth of critical thinking. It is scary and worrisome at the same time. Maybe the real danger isn’t that AI gives us answers, but that we stop learning how to wrestle with the questions.
Founder @ High Five | Helping tech companies hire and pay teams across Southeast Asia 🚀
Another quote I love is "the mind does not know what the tongue wants": fried chicken and waffles doesn't make sense on paper, but it's delicious. This is relevant to building product. Users often don't know what solutions they want or need. Product discovery should focus on identifying problems, pain, and willingness to pay.