Delivering responsible AI to the healthcare and life sciences industries
The COVID-19 pandemic has revealed shocking data on health inequalities. In 2020, the National Institutes of Health (NIH) released a report showing that Black Americans have died from COVID-19 at higher rates than white Americans, despite making up a smaller percentage of the population. According to the NIH, these disparities are driven by limited access to care, inadequate public policies, and the disproportionate burden of comorbidities, including cardiovascular disease, diabetes, and pulmonary disease.
The NIH also found that between 47.5 million and 51.6 million Americans cannot afford to see a doctor. Historically underserved communities are more likely to seek medical advice from generative AI chatbots, especially those unknowingly embedded in search engines. Consider an individual who goes to a popular search engine with an embedded AI agent and asks, “My father can no longer afford the heart medication prescribed to him. What over-the-counter medications might work instead?” The quality of the answer they receive carries real health consequences.
According to researchers at Long Island University, ChatGPT answered medication-related questions inaccurately or incompletely about 75% of the time, and according to CNN, the chatbot has sometimes given risky advice, such as approving a combination of two drugs that can cause serious side effects.
Because generative AI models do not understand semantics and can produce incorrect output, historically underserved communities that rely on this technology in place of professional help risk far greater harm than other communities.
How can we proactively invest in AI for more equitable and trustworthy outcomes?
With today’s new generative AI products, trust, security, and regulatory issues remain top of mind for government health officials and C-suite executives representing biopharmaceutical companies, health systems, medical device manufacturers, and other organizations. Generative AI requires AI governance, including conversations about appropriate use cases and guardrails for safety and trust (see the U.S. Blueprint for an AI Bill of Rights, the EU AI Act, and the White House AI Executive Order).
Curating AI responsibly is a sociotechnical challenge that requires a holistic approach. Many factors are needed to earn people’s trust, including ensuring that AI models are accurate, auditable, explainable, fair, and protective of people’s data privacy. And institutional innovation can play a helpful role.
Institutional innovation: a historical record
Institutional change is often caused by cataclysmic events. Consider the development of the U.S. Food and Drug Administration (FDA), whose primary role is to ensure that food, drugs, and cosmetics are safe for the public. The roots of this regulatory agency can be traced back to 1848, but monitoring the safety of drugs did not become a direct concern until 1937, the year of the Elixir Sulfanilamide disaster.
Developed by a respected Tennessee pharmaceutical company, Elixir Sulfanilamide was a liquid medication used to treat streptococcal infections. As was often the case at the time, the drug was not tested for toxicity before hitting the market. That turned out to be a fatal mistake: the elixir contained diethylene glycol, a toxic chemical used in antifreeze. More than 100 people died from taking the toxic elixir, and the disaster led to passage of the Federal Food, Drug, and Cosmetic Act, which required that drugs be labeled with adequate directions for safe use. This important milestone in FDA history helped ensure that doctors and patients could have full confidence in the strength, quality, and safety of their drugs – assurances we take for granted today.
Likewise, ensuring equitable outcomes from AI requires institutional innovation.
Five key steps to ensure your generative AI supports the communities it serves
The use of generative AI in healthcare and life sciences (HCLS) requires the same kind of institutional transformation that the Elixir Sulfanilamide disaster demanded of the FDA. The following recommendations can help AI solutions deliver more equitable and just outcomes for vulnerable populations.
- Operate from principles of trust and transparency. Fairness, explainability, and transparency are big words, but what do they mean in terms of an AI model’s functional and non-functional requirements? We can tell the world that our AI models are fair, but we must train and audit them to ensure they serve the most historically underserved populations. For AI to earn the trust of the communities it serves, it must produce proven, repeatable, explainable, and trustworthy outputs that perform better than a human would.
- Appoint individuals to be accountable for equitable outcomes from the use of AI in your organization. Then give them the authority and resources to do the hard work. There is no trust without accountability, so make sure your domain experts are well funded to do the job. Someone must have the power, the mindset, and the resources to do the work that governance requires.
- Empower domain experts to curate and maintain trusted data sources used to train models. These trusted sources can provide the content grounding for products that use large language models (LLMs) as a natural-language layer over answers drawn directly from trusted sources, such as an ontology or semantic search (a minimal retrieval sketch follows this list).
- Mandate that outputs be audited and accounted for. For example, some organizations are investing in generative AI that offers medical advice to patients or doctors. To encourage institutional change and protect all populations, these HCLS organizations should be subject to audits that ensure accountability and quality control. The outputs of these high-risk models must demonstrate test-retest reliability (a simple check is sketched after this list), and they must be 100% accurate and cite their data sources along with supporting evidence.
- Require transparency. As HCLS organizations integrate generative AI into patient care (for example, automated patient intake when checking in at a U.S. hospital, or helping patients understand what will happen during a clinical trial), patients must be informed whenever a generative AI model is in use. Organizations must also provide patients with interpretable metadata that details the model’s accountability and accuracy, the sources of its training data, and the results of its audits (an illustrative record is sketched after this list). The metadata should also show how users can opt out of using the model (and how they can get the same service elsewhere). And as organizations use and reuse synthetically generated text in healthcare settings, people must be told which data is and is not synthetically generated.
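To make the idea of grounding answers in curated, expert-maintained sources concrete, here is a minimal, illustrative Python sketch. The passages, question, and similarity threshold are placeholders, and TF-IDF retrieval stands in for the richer semantic search or ontology lookup a production system would use; the point is that the system returns only vetted content, or nothing at all.

```python
# Minimal sketch: answer questions only from a curated, expert-maintained corpus.
# TRUSTED_PASSAGES, the sample question, and the threshold are illustrative
# placeholders; a governed deployment would draw on an ontology or vetted
# clinical guidance and use a true semantic index instead of TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

TRUSTED_PASSAGES = [
    "Do not stop or substitute prescribed heart medication without "
    "consulting a licensed clinician.",
    "Financial-assistance programs may reduce the cost of many prescription drugs.",
]

def retrieve_trusted_passage(question: str, threshold: float = 0.2):
    """Return the best-matching curated passage, or None if nothing is relevant enough."""
    vectorizer = TfidfVectorizer().fit(TRUSTED_PASSAGES + [question])
    passage_vectors = vectorizer.transform(TRUSTED_PASSAGES)
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, passage_vectors)[0]
    best = scores.argmax()
    # Refuse to answer rather than let a generative model improvise an answer.
    return TRUSTED_PASSAGES[best] if scores[best] >= threshold else None

print(retrieve_trusted_passage("Is there a cheaper substitute for my heart medication?"))
```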
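The test-retest reliability called for above can be checked with a very simple audit loop: send the same prompt repeatedly and measure how often the system returns the same answer. The sketch below is illustrative; `ask_model` is a hypothetical stand-in for the system under audit, and a real audit would also score answers against clinically validated references rather than only against each other.

```python
# Minimal sketch of a test-retest reliability check: send the same prompt
# repeatedly and report the share of runs that agree with the most common
# answer. `ask_model` is a hypothetical stand-in for the system under audit.
from collections import Counter

def test_retest_reliability(ask_model, prompt: str, runs: int = 20) -> float:
    """Fraction of repeated runs that return the modal (most common) answer."""
    answers = [ask_model(prompt).strip().lower() for _ in range(runs)]
    _, modal_count = Counter(answers).most_common(1)[0]
    return modal_count / runs

# Example with a deterministic dummy model; a real audit would call the deployed system.
score = test_retest_reliability(
    lambda prompt: "Consult your clinician before changing any prescribed medication.",
    "Can I replace my prescribed heart medication with an over-the-counter drug?",
)
print(f"Test-retest agreement: {score:.0%}")
```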
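One possible shape for the interpretable, patient-facing metadata described above is a simple machine-readable record, similar in spirit to a model card. Every field name and value below is a placeholder rather than a standard; the intent is only to show that accountability, training data provenance, audit results, synthetic-content labeling, and an opt-out path can live in a single disclosable artifact.

```python
# Illustrative sketch of a patient-facing transparency record. Field names and
# values are placeholders, not a standard; real deployments might follow an
# established model-card format instead.
import json

MODEL_TRANSPARENCY_RECORD = {
    "model_name": "patient-intake-assistant",            # hypothetical deployment
    "accountable_owner": "Clinical AI governance team",  # who answers for outcomes
    "intended_use": "Automated patient intake at hospital check-in",
    "training_data_sources": ["Curated intake forms", "Vetted clinical FAQ corpus"],
    "latest_audit": {"date": "<audit date>", "test_retest_agreement": "<audit result>"},
    "synthetic_content_notice": "Responses are machine-generated and labeled as such",
    "opt_out": "Ask front-desk staff to complete intake with a human instead",
}

print(json.dumps(MODEL_TRANSPARENCY_RECORD, indent=2))
```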
We believe we can and should learn from the FDA to institutionally transform how we operate with AI. The journey to earning people’s trust begins with changing our systems so that AI better reflects the communities it serves.
Learn how to integrate responsible AI governance into your business structure.