AI in healthcare still has a long way to go

For many years, artificial intelligence (AI) technology has promised to dramatically improve the healthcare industry. Whether by increasing access to and understanding of data, providing better ways to navigate patient care, or accelerating new research and development efforts, healthcare experts have long looked forward to the widespread use of AI. Many companies have invested billions of dollars in hopes of improving the quality and viability of AI in their respective fields. And, with good reason: these efforts have certainly yielded many useful results, many of which have formed the foundation for continued building and innovation in this space. Nevertheless, the technology still has a long way to go.

One of the main challenges in developing AI technology for healthcare has been cultivating good datasets on which to train models. Conceptually, the broad scope of "AI" technology uses large datasets to decipher patterns and make recommendations accordingly. However, these pattern-recognition results and recommendations are only as good as the datasets provided, which can be problematic in many contexts, especially when dealing with patient care data.

This potential introduction of bias into AI-based care has been widely discussed by key leaders. According to Dr. Paul Conway, president of policy and global affairs for the American Association of Kidney Patients, "Devices using AI and ML technology will transform the delivery of healthcare by increasing the efficiency of key processes in the treatment of patients…" However, as described by Pat Baird, Regulatory Manager of Global Software Standards at Philips, "To help support our patients, we need to become familiar with them, their medical conditions, their environment and their needs and we want to be able to better understand the potentially confounding factors that drive some of the trends in the collected data…" The latter alludes to a very specific problem that many AI enthusiasts repeatedly encounter: bias due to datasets that are too small, too segmented, or simply inaccurate.

For example, an AI algorithm developed to provide pain medication recommendations from a dataset containing only cancer patients would likely not make sense to apply to the general population. After all, the painkillers needed by cancer patients are very different from, and often more potent than, those needed by the general population, so the recommendations would be heavily skewed. This is just one type of bias; extending the same potential for inaccuracy across ethnicities, races, socioeconomic statuses, and other factors can pave the way for dangerously inaccurate clinical decisions.
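The skew described above can be made concrete with a deliberately naive sketch: a toy "recommender" that simply suggests the average dose seen in its training data will inherit whatever sampling bias that data carries. All names and numbers below are hypothetical, chosen only to illustrate the point.

```python
def recommend_dose(training_doses):
    """Naive recommender: suggest the mean dose observed in training data."""
    return sum(training_doses) / len(training_doses)

# Hypothetical daily doses (morphine-equivalent mg) from cancer patients only.
cancer_only = [90, 120, 150, 110, 100]

# A hypothetical, more representative mix including general-population patients.
representative = [90, 120, 10, 15, 20, 5, 10]

# The narrow sample skews the recommendation far higher than the mixed sample.
print(recommend_dose(cancer_only))
print(recommend_dose(representative))
```

The model itself is not "wrong" in either case; it faithfully reflects its inputs. That is precisely the problem the article describes: a recommendation trained on an unrepresentative slice of patients can be confidently and systematically off for everyone else.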

Why is this important? Because, if used correctly, AI has the potential to become a powerful force in the clinical setting. I’ve written in the past about how AI can be a valuable tool in a variety of fields, from radiology to cancer care. While it may not be able to replace the complexity, knowledge, and wisdom of physician-led patient care, there may indeed be a place for AI modalities as tools to augment clinical workflows.

However, for this technology to add real value, systems must produce high-fidelity recommendations built on accurate and representative data. Only then can physicians truly leverage this technology to make an effective, bias-free impact on the delivery of care. Indeed, innovators, healthcare leaders, and healthcare providers have an important task ahead of them in the years to come.
