THE AI REVOLUTION: CENTERING HUMAN DIGNITY IN A DATA-DRIVEN HEALTHCARE FUTURE
We are living through a technological shift as disruptive as the arrival of the automobile. In health care, AI holds enormous promise, but only if we apply it with purpose, grounded in dignity, and in service of real human outcomes.
Carriages, Cabs, and a New Kind of Revolution
I've recently been drawn in by the HBO series The Gilded Age. It's a lush portrayal of ambition, power, and change at the dawn of modern America. But what struck me most wasn't the drama. It was the detail. The image of New Yorkers hailing horse-drawn taxis reminded me that, not long ago, the entire transportation system was powered by horses and steam.
For centuries, people traveled by carriage. An entire economy of stable workers, feed suppliers, blacksmiths, and barn owners existed to care for horses and keep them working. Then came the automobile. Almost overnight, what was once indispensable became obsolete. Cities were redesigned. Labor markets shifted. Human habits changed. Not because we planned for it, but because technology demanded it.
That moment feels eerily familiar today. We are living through another turning point, one that may prove even more transformative than the automobile or the industrial revolution itself. The force behind it is artificial intelligence.
The Power Is Real But Misunderstood
Let me be clear. AI can revolutionize health care, and it is already embedded in many parts of the system. According to a 2024 Deloitte survey, 80 percent of hospitals now use AI to improve patient care and workflow efficiency, and the global market for AI in health care is projected to exceed $120 billion by 2028 (Deloitte, Blue Prism). Think of wearables providing real-time vitals, or LLMs helping clinicians draft notes and translate complex medical instructions into plain language. But here is the problem: most people are either reading headlines or experimenting casually with free versions of AI tools. Unless you are consistently using advanced platforms such as ChatGPT, Gemini, or other AI assistants as part of your daily workflow, it is difficult to grasp how deeply this technology is already reshaping the way we work and live.
Those of us who are actively using this technology every day see the potential. We also see the limitations. Many of the AI-powered tools being built right now are, quite frankly, not defensible. They are solutions in search of a problem, or worse, solutions that ignore the human context entirely.
Defensibility Requires the Human Layer
Most of what I've seen being built with AI in health care is not defensible. Despite the proliferation of health-focused AI startups, a 2023 Rock Health study found that over 60 percent of them lacked a clearly defined path to clinical integration or meaningful outcomes, raising questions about their sustainability and defensibility (Rock Health). There is a rush to build chatbots and LLM-powered tools that simply mimic the capabilities of general-purpose models like GPT-4. The real question is not whether we can collect or summarize data. The question is what we do next. What outcome does it provoke? What human action does it enable?
For instance, imagine a patient living with diabetes. The technology already exists for that patient to wear a glucose monitor, have a real-time conversation with an AI assistant about their meals, and prompt a human follow-up when patterns shift. Over 400 million wearable health devices are projected to be in use globally by 2026 (Statista), and platforms combining this data with conversational AI are beginning to reshape chronic care management. This matters especially for underserved populations: low-income individuals may not want to fill out forms, but they will talk. The AI's role is to capture the data. The human's role is to interpret it and act.
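To make that division of labor concrete, here is a minimal sketch of the idea in Python. Everything in it is hypothetical: the readings, the simple moving-average rule, and the notify_care_team stub stand in for whatever monitoring platform, clinically validated thresholds, and escalation channel a real system would use.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean


@dataclass
class GlucoseReading:
    """A single reading captured from a (hypothetical) wearable monitor."""
    timestamp: datetime
    mg_dl: float


def pattern_shift(readings: list[GlucoseReading], baseline: float,
                  threshold: float = 25.0) -> bool:
    """Flag a shift when the recent average drifts from the patient's baseline.

    A real system would use a clinically validated rule; this
    moving-average comparison only illustrates the idea.
    """
    if len(readings) < 3:
        return False
    recent_avg = mean(r.mg_dl for r in readings[-3:])
    return abs(recent_avg - baseline) > threshold


def notify_care_team(patient_id: str, readings: list[GlucoseReading]) -> None:
    # Stub: in practice this would page a nurse or care coordinator.
    # The AI captures and summarizes; a human interprets and acts.
    print(f"Follow-up requested for {patient_id}: "
          f"latest reading {readings[-1].mg_dl} mg/dL")


if __name__ == "__main__":
    history = [
        GlucoseReading(datetime(2025, 1, 1, 8, 0), 105.0),
        GlucoseReading(datetime(2025, 1, 1, 12, 0), 150.0),
        GlucoseReading(datetime(2025, 1, 1, 18, 0), 160.0),
        GlucoseReading(datetime(2025, 1, 2, 8, 0), 170.0),
    ]
    if pattern_shift(history, baseline=110.0):
        notify_care_team("patient-001", history)
```

The point of the sketch is the hand-off. The software's job ends at surfacing the pattern; deciding what that pattern means for this patient's care belongs to a human.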
Most Builders Are Missing This
Too many startups are building around the data rather than what the data enables. A good AI tool is not just a dashboard or chatbot. It is a tool that prompts real-world human engagement. It is a system that understands care is relational, not just transactional. The AI is not the end. It is the beginning.
And right now, the base technology from OpenAI and others already gives you most of what these new tools are claiming to offer. What differentiates a good product from a great one is not a new layer of code. It is a layer of trust, responsiveness, and actionability that centers the human being.
If We Get It Right
If we build with the right lens (human-centered, culturally responsive, and data-smart), AI will help us tackle some of our most persistent health challenges. Health disparities. Chronic conditions. Behavioral health. Burnout among providers. Analysts estimate that AI could reduce annual U.S. health care costs by up to $360 billion if applied effectively to administrative workflows and clinical decision support (McKinsey). It can be the infrastructure we have long needed to support care, not just streamline it.
But to get there, we must resist the hype and ask harder questions: What outcome does this tool provoke? Who is interpreting the data? What next step in care does it unlock? Is the technology in service of the human, or is it trying to replace them?
In my most recent conversations about the future of health care, I keep returning to one principle: dignity. Whether through wearables or workflows, we must design systems that see the whole person. The future is not just about what AI can do. It is about what we choose to do with it, and how we center the human in every interaction.
For me, dignity means seeing the person in front of you. It means honoring their humanity and not ignoring the reality of what life looks like for them. For our health systems to be truly effective, they must be able to do that. This, in essence, is what dignity looks like in health care. And this same principle must guide how we incorporate AI into the delivery of care.
This essay was inspired, in part, by a recent conversation with my colleague, Ramon Llamas. We connected to exchange ideas about how technology can be used to support low-income and Medicaid patients, and the reflections from that dialogue helped shape several of the perspectives shared here.