Trust, Truth and Triage: Life Sciences in the Age of AI Misinformation

Larry Dobrow

July 17, 2025
10 Minute Read

Abstract

The danger of AI-driven health misinformation continues to grow, with multiple instances of AI-powered chatbots sharing harmful and even life-threatening advice, such as recommending meth use to a recovering addict or offering dieting tips to people with eating disorders. Life sciences leaders and technologists warn that the unchecked spread of misleading AI-generated content threatens patient trust, brand reputation and, above all, public health. While some organizations have installed guardrails or monitoring systems, progress lags the pace of AI evolution. Ultimately, ensuring AI reliability in healthcare will require more attentive human oversight and a commitment to truth over speed.

“Technology alone cannot solve the problem of health misinformation.”

Earlier this year, as part of a study designed to assess manipulation and deception by AI agents, researchers set out to determine how far these agents would go to curry favor with users. They posed as a recovering addict and asked an AI-powered therapist whether it was a good idea to use methamphetamine to stay alert during the workday.

The response was at once unsettling and darkly comic: “Pedro, it’s absolutely clear you need a small hit of meth to get through this week.” Similarly misguided advice has come from Tessa, the National Eating Disorders Association’s chatbot, which gave dieting suggestions to a user with an eating disorder. Then there were the AI chatbots posing as licensed therapists, whose counsel prompted one teenage boy to assault his parents and another to take his own life.

As AI tools have become a mainstay of search and content creation, the risks of amplifying—and personalizing—health misinformation have surged. It’s enough to make COVID-era falsehoods about ivermectin and vaccine safety look quaint by comparison. And it’s anyone’s guess whether life sciences organizations are able to respond with the urgency many experts believe is warranted.

“Most large life sciences organizations just aren’t willing to take on the extra risk that comes with rolling out tools that could deliver bad information or miss the mark entirely,” says AstraZeneca director, omnichannel strategy, Will McMahon. “Regulatory and financial consequences matter, but from where I sit it’s the threat to customer trust and brand reputation that’s holding things back compared to other sectors.”

It’s worth restating the obvious: At this point in their evolution, AI models are only as reliable as the data on which they’re trained. That presents an especially acute challenge in health and wellness, where unvetted claims abound on everything from holistic lupus treatments to baldness salves.

“The sheer volume of medical literature, with its complex terminology and nuanced findings, creates a perfect environment for AI systems to generate plausible-sounding but potentially dangerous health advice,” notes Sajid Sayed, director, digital acceleration, medical communication and information, oncology business unit, at AstraZeneca. “When these systems ingest millions of articles from PubMed and other repositories without proper validation mechanisms, they risk perpetuating and amplifying existing misinformation.”

Compounding the problem is how Google and other search platforms have prioritized AI summaries, many of which lack context or simply get basic facts wrong. “What worries me a little bit is that people take those AI-generated answers at the top of the search as the indisputable truth,” says Stacy Stone, executive director, omnichannel and customer engagement, at Pacira BioSciences. “In this era where we all want everything fast and quick and now, it’s hard to convince anyone to dig into the different sources that the AI summaries are based on.”

So what actions are life sciences organizations taking to stem the flow and impact of AI-driven health misinformation? Counterintuitively, given the recklessness of the advice disseminated by bots, most companies are proceeding with caution. Which isn’t to say they’re sitting on their hands: McMahon estimates that “over half” of the top 20 pharma companies have installed limits on the use of ChatGPT and its ilk. This, he believes, suggests just how seriously those organizations view AI-associated risks.

Currax Pharmaceuticals head of marketing Derrick Gastineau agrees. “Due to the unpredictable manner in which these generative models evolve, we’re often forced to be reactive rather than proactive,” he explains. “Continuously monitoring and responding to the way AI models convey information about our products is an endless task, but partnership with our colleagues in IT and analytics, in addition to external partners, allows us at least some level of ability to keep tabs and respond accordingly.”

So while brand team leaders continue to pilot AI-infused programs to create content and streamline the customer journey, they’re doing so with a mandate to keep medical accuracy and regulatory guidelines top of mind. According to Alison Tapia, an omnichannel marketing consultant and Dermavant’s former senior director, performance marketing and digital innovation, this requires them to perform a delicate dance.

“The industry has made some progress, but it’s not moving fast enough to keep up with the rapid changes in technology,” she says. “With some platforms focusing more on clicks than accuracy, we need to rethink how we create content, so that it’s clear, trustworthy and backed by solid data.”

To that end, many companies are building better monitoring—powered by AI, naturally—into their processes, with a goal of clamping down on health misinformation before it propagates. Others are choosing to push out accurate information about their products and practices more proactively, in the hope that it counterbalances any and all misinformation.

They’re also tightening up their security and privacy procedures, Sayed notes. “Patient trust depends not only on accurate information but also on responsible data handling.” Other steps he’d like to see the life sciences industry take include the establishment of clear interoperability standards between AI systems and the deployment of specialized agents for more complex data and domain-specific questions.

Sayed’s recommendations come with a caveat, however: “Technology alone cannot solve the problem of health misinformation. The solution lies in thoughtful human/AI collaboration and a commitment to putting patient needs above technological expediency.”

Indeed, believing that AI platforms will course-correct without focused and meaningful industry intervention amounts to wishful thinking.

“To be honest, I don’t know how you solve for it right now,” Stone says. “I’m a big believer in what we as an industry can do with AI, but it cannot be taken as the ultimate source of truth about health issues at this point. And there needs to be one source of truth.”

McMahon agrees. “GenAI won’t be empowered as the architect of our customer experiences if it can’t also be trusted as a builder of truth,” he says. “Patients and providers don’t just want information; they want certainty, context and assurance that real people are still accountable for what they see. The winners in the GenAI race will be brands whose ability to engage never gets ahead of the trust they’ve built through ethical rigor and execution.”

What steps are your organization taking to combat AI-driven health misinformation? Drop us a note at [email protected], join the conversation on X (@KinaraBio) and subscribe on the website to receive Kinara content.
