The Powerful Potential of Social Networks to Diagnose Disease – but at what cost?

By: Dr. Sam Volchenboum

Dr. Sam Volchenboum is the director of the Center for Research Informatics at the University of Chicago, a board-certified pediatric hematologist and oncologist, and the cofounder of Litmus Health, a data science platform for early-stage clinical trials.

. . .

The world is becoming one big clinical trial. Humanity is generating streams of data from different sources every second. And this information, continuously flowing from social media, mobile GPS and wifi locations, search history, drugstore rewards cards, wearable devices, and much more, can provide insights into a person's health and well-being.

It’s now entirely conceivable that Facebook or Google—two of the biggest data platforms and predictive engines of our behavior—could tell someone they might have cancer before they even suspect it. Someone complaining about night sweats and weight loss on social media might not know these can be signs of lymphoma, or that morning joint stiffness and a propensity to sunburn could herald lupus. But it’s feasible that bots trawling social network posts could pick up on these clues.

Sharing these insights and predictions could save lives and improve health, but there are good reasons why data platforms aren’t doing this today.

The question is: do the risks outweigh the benefits?

A Thought Experiment

Although social media platforms have drawn attention for their potential to predict, and possibly prevent, suicide, the idea that those platforms could spot disease before a patient has even visited the doctor remains, for now, hypothetical. But it’s not far-fetched.

For instance, a data set consisting of social media posts from tens of thousands of people will likely chronicle the journey that some had on their way to a diagnosis of cancer, depression, or inflammatory bowel disease. Using machine-learning techniques, a researcher could take those data and study the language, style, and content of those posts both before and after the diagnosis. They could devise models that, when fed new sets of users’ data, could predict who will likely go on to develop similar conditions.
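The thought experiment above can be made concrete with a toy sketch. Everything here is invented for illustration: the posts, the labels, and the simple naive Bayes scoring; a real study would require consented, de-identified data from tens of thousands of users and far more rigorous validation.

```python
import math
from collections import Counter

# Toy corpus: posts known (only in hindsight) to precede a diagnosis,
# versus posts from control users. All text is invented.
pre_diagnosis = [
    "night sweats again and more weight loss",
    "joints so stiff every morning lately",
]
controls = [
    "great hike this weekend feeling strong",
    "new recipe turned out perfectly tonight",
]

def word_counts(posts):
    """Count word occurrences across a list of posts."""
    counts = Counter()
    for post in posts:
        counts.update(post.lower().split())
    return counts

pos, neg = word_counts(pre_diagnosis), word_counts(controls)
vocab = set(pos) | set(neg)

def risk_score(post):
    """Naive Bayes log-odds that a post resembles pre-diagnosis language."""
    score = 0.0
    for word in post.lower().split():
        if word in vocab:
            # Laplace smoothing keeps words unseen in one class finite.
            p_pos = (pos[word] + 1) / (sum(pos.values()) + len(vocab))
            p_neg = (neg[word] + 1) / (sum(neg.values()) + len(vocab))
            score += math.log(p_pos / p_neg)
    return score

print(risk_score("more night sweats and weight loss") > 0)   # True: leans pre-diagnosis
print(risk_score("perfectly great hike this weekend") > 0)   # False: leans control
```

A positive score only means a post's language resembles the pre-diagnosis corpus; as the article goes on to argue, turning such a score into anything clinically actionable is exactly where the ethical and regulatory questions begin.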

Our digital trail leaves many clues, both subtle and overt, to our overall health and well-being. How we use those data for good is another issue.

As a clinician, I support integrating data and putting the troves of information to use for society’s benefit. One of the reasons I co-founded Litmus Health, a data science company, was to help researchers better collect, organize, and analyze data from clinical trials, and in turn, use those data to improve health outcomes for society writ large. However, significant regulatory, ethical, technical, and societal considerations require caution.

From a regulatory perspective, all companies bear some responsibility for their users’ data, as defined in their terms of service. Unfortunately, as cases like a 2014 Facebook study and research from Carnegie Mellon have shown, terms of service and privacy policies are so complicated that few people read them; users simply accept them blindly.

Companies can demonstrate an ethical “do no harm” obligation to their users by having a straightforward and easy-to-understand data policy, and by not using personal data in inappropriate ways. An ethical framework for big data must consider identity, privacy, data ownership, and reputation. For most firms today, releasing users’ data to build predictive models without their consent would go against their established value systems. But obtaining consent may be as trivial as someone mindlessly clicking through an exorbitantly long terms-of-service agreement.

If companies are going to ask users to share their data and participate in an experiment, they should be more transparent about how the data are collected, used, and shared.

Let’s say a social network has an algorithm that analyzes a user’s activities—things they complain about, articles they share, friends’ posts they like, among other things. The AI could potentially identify a pattern suggesting the presence of a medical condition. Now imagine being able to link across social networks and also to other available data streams from wearables, sensors, and mobile devices. All of a sudden, the predictive value of these disparate data streams could become very high.
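One way to picture why linking streams raises predictive value: each stream contributes a weighted anomaly signal to a combined risk estimate. The stream names, scores, and weights below are entirely hypothetical, and the logistic combination is just one simple choice.

```python
import math

def combined_risk(stream_scores, weights):
    """Map weighted per-stream anomaly scores to a 0-1 risk via a logistic curve."""
    z = sum(weights[s] * stream_scores.get(s, 0.0) for s in weights)
    return 1 / (1 + math.exp(-z))

# Hypothetical standardized anomaly scores for one user, per data stream.
scores = {"posts": 1.8, "wearable_hr": 1.2, "search": 0.9}
# Assumed weights; in practice these would be fitted, not hand-picked.
weights = {"posts": 0.8, "wearable_hr": 1.1, "search": 0.5}

all_streams = combined_risk(scores, weights)
posts_only = combined_risk({"posts": scores["posts"]}, weights)
print(all_streams > posts_only)  # True: linked streams sharpen the estimate
```

The point is structural, not numerical: any one stream in isolation is a weak signal, but several weakly informative streams combined can push an estimate toward high confidence, which is precisely what makes cross-stream linkage both powerful and risky.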

A perfect predictive system might be heralded as a medical breakthrough, but sometimes a typo is just a typo, and most people with headaches and nausea do not have brain tumors.

Using social media cues to help someone recognize that they may have the flu could prompt users to seek testing or treatment, both relatively benign and inexpensive interventions. But a cancer scare suggested under similar circumstances could carry more serious consequences. Multiplied across millions of users, the potential logistical and financial implications for the healthcare system could be enormous.

Algorithm-based predictions are already useful and widely applied across many areas of our lives, but these examples show why the same predictions carry more weight in health and health care, and why their use there should be closely governed and monitored for potential benefits and risks.

Consumers Should Opt-In

As a clinician, I believe that consumers should be able to freely access the health data they generate across all streams.

Patients are taking an active role in their treatment plans; it ought to be medical professionals' jobs to facilitate their ability to do so.

Individuals should be able to opt in to allow providers to collect and track their data for health predictions. Companies would need to carefully determine tracking criteria for specific diseases, and at what point they would notify the user that they are at risk. Once notified, the user would have the option to receive more information or send their data directly to their healthcare provider. For this to work, new data governance and stewardship models will be required, and legal protections for people and their data will become increasingly important.
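The opt-in flow described above can be sketched as a simple decision rule. The field names, threshold, and action labels here are hypothetical placeholders, not a real platform's API; the one non-negotiable in this sketch is that users who have not consented are never scored at all.

```python
from dataclasses import dataclass

@dataclass
class User:
    opted_in: bool
    risk: float                    # model output, 0-1 (hypothetical)
    notify_threshold: float = 0.8  # assumed per-condition cutoff

def handle(user):
    """Return the action the platform should take for this user."""
    if not user.opted_in:
        return "no_tracking"  # never score users who haven't consented
    if user.risk < user.notify_threshold:
        return "monitor"
    # At-risk and consented: the user chooses the next step.
    return "notify: offer info or send data to provider"

print(handle(User(opted_in=False, risk=0.95)))  # no_tracking
print(handle(User(opted_in=True, risk=0.5)))    # monitor
print(handle(User(opted_in=True, risk=0.9)))    # notify: offer info or send data to provider
```

Even this trivial rule surfaces the governance questions the article raises: who sets the threshold per disease, who audits the model behind `risk`, and what legal protections cover the data once a notification is sent to a provider.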

The people, companies, and organizations that hold private data have a big responsibility. If they're going to use these data to make better predictions about health and disease, then everyone needs to work together to better understand the expectations and responsibilities of all parties. The technical, legal, and social barriers are significant, but the potential for improving people’s health is tremendous.
