AI and healthcare

Many promises have been made regarding artificial intelligence (AI) and how it will improve the quality of healthcare today. Ambitious statements and impressive solutions have dominated the rhetoric, but how close are we really to an AI-enabled healthcare system? What stands in the way, and what still needs to be done?

When the topic of AI in healthcare is discussed, existing solutions are usually conflated with visions and expectations. Just because the technology exists doesn't mean that it is ready to be implemented and put into production. There are several issues standing in the way of progress, including regulatory and privacy-related concerns, as well as sociocultural and pedagogical issues. Nevertheless, it is still compelling to keep watch on the emerging AI innovations for medicine.

Let's take a look at a few of the general areas where AI is, or has been said to be, pushing the envelope in 2018!

Getting to the bottom of the hype

Why introduce AI techniques into healthcare? There are many scenarios where automated systems could potentially offload tasks and empower personnel in the healthcare industry. Let’s consider a couple of such scenarios (this list is, of course, far from exhaustive!):

Medical image interpretation

Radiologists – doctors who interpret medical images from X-ray, magnetic resonance imaging (MRI), computed tomography (CT) and other imaging modalities – typically sit in dark, quiet rooms to analyze large amounts of images on a regular basis. Like all humans, they are susceptible to fatigue and lapses in concentration and could potentially miss critical observations in the images they process each day. What if there was an automated system that could churn through all the images quickly and suggest points of interest to the doctor, thereby reducing the workload considerably? According to some estimates, images make up the bulk of all existing medical data, and therefore, automatic image analysis could be considered the low-hanging fruit in AI today.
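The workload-reduction idea can be sketched as a simple triage rule: a model assigns each study a suspicion score, and only high-scoring studies are pushed to the front of the radiologist's worklist. Everything below is illustrative – `suspicion_score` is a hypothetical stand-in for a trained image model, and the "images" are just toy records:

```python
# Hypothetical triage sketch: route studies whose model score exceeds a
# threshold to the front of the radiologist's worklist.

def triage(images, suspicion_score, threshold=0.8):
    """Split studies into priority and routine queues by model score."""
    priority, routine = [], []
    for image in images:
        score = suspicion_score(image)
        (priority if score >= threshold else routine).append((score, image))
    # Highest-scoring studies first, so the most suspicious are read soonest.
    priority.sort(key=lambda item: item[0], reverse=True)
    return priority, routine

# Toy stand-in model: here each "image" is just a dict carrying a fake score.
fake_model = lambda img: img["score"]
images = [
    {"id": "a", "score": 0.95},
    {"id": "b", "score": 0.20},
    {"id": "c", "score": 0.85},
]
prio, rest = triage(images, fake_model)
```

The point of the sketch is that the model does not replace the radiologist; it only reorders the queue, and the threshold controls how much of the workload is flagged for priority review.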

Diagnosis decision support and text understanding

Much of the world's medical information lives in medical records written on paper or in software systems, often in a free-text form. Another major source of information is in journal articles, mostly composed of unstructured and unstandardized text. Electronic health records (EHRs) are more structured, consistent and standardized, but the data may be difficult to compare to information in other systems. Needless to say, this disparity presents significant challenges when trying to integrate information across hospitals, cities or even countries. This is a task that computers should in principle be able to do better than humans.

Imagine a helper algorithm that could sit (so to speak) next to the doctor and propose a diagnosis in response to a patient complaint. While the doctor has the training, common sense and authority, he or she cannot realistically keep up with medical case histories involving obscure diseases in all parts of the world. An algorithm, however, would in principle be able to connect the dots and propose an obscure diagnosis that the doctor could consider.
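A crude caricature of such a helper algorithm is to rank candidate diagnoses by how well the patient's reported findings overlap with known disease profiles. The disease profiles below are invented for illustration, not real clinical data, and a real system would use far richer representations than symptom sets:

```python
# Toy sketch of a "helper algorithm": rank candidate diagnoses by what
# fraction of each disease profile matches the patient's findings.
# The profiles are illustrative placeholders, not clinical knowledge.

KNOWLEDGE_BASE = {
    "common cold":    {"cough", "runny nose", "sore throat"},
    "influenza":      {"fever", "cough", "muscle aches", "fatigue"},
    "rare_disease_x": {"fever", "rash", "joint pain", "fatigue"},
}

def rank_diagnoses(findings):
    """Return (score, diagnosis) pairs, best match first."""
    findings = set(findings)
    scored = [
        (len(findings & profile) / len(profile), disease)
        for disease, profile in KNOWLEDGE_BASE.items()
    ]
    return sorted(scored, reverse=True)

suggestions = rank_diagnoses(["fever", "rash", "joint pain"])
```

Even this toy version shows the intended division of labor: the algorithm surfaces an obscure candidate the doctor might not have considered, while the final judgment stays with the doctor.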

There are other AI applications related to text, such as automatic summarization of spoken or written patient health histories, which is a necessary but time-consuming part of insurance claim and reimbursement procedures.

Information from many disparate sources, like medical records, research papers and databases, needs to be integrated to enable better decision support for diagnosis.

Drug discovery

On the preclinical side of healthcare lies drug discovery – the scientific study of compounds for potential use as clinically approved pharmaceutical drugs. Pharmaceutical companies maintain large libraries of drug candidate molecules (either actual molecules or in simulated form), and drug discovery is based on understanding the relationship between a molecule's structure and its activity in the human body. Algorithms should in principle be faster than humans at sifting through masses of information on existing molecules in order to predict the activity of a known or previously unseen molecule. This could lead to an accelerated drug discovery pipeline and shorten the time required for new drugs to enter clinical use.

Deep learning systems can be used to predict a pharmaceutical compound’s effect in different situations (e.g., in different tissues) based on its molecular structure.
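One classical way to frame the structure-activity relationship computationally is to encode each molecule as a "fingerprint" of substructure features and predict the activity of an unseen molecule from its most similar known neighbor. The fingerprints and activity labels below are made up for illustration; production systems use learned fingerprints and trained models rather than a single nearest neighbor:

```python
# Minimal structure-activity sketch: molecules are encoded as sets of
# substructure features ("fingerprints"), and an unseen molecule inherits
# the activity label of its most similar library molecule.
# All features and labels here are invented for illustration.

def tanimoto(a, b):
    """Tanimoto similarity between two feature sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

LIBRARY = {
    frozenset({"ring", "amine", "hydroxyl"}): "active",
    frozenset({"ring", "halogen"}): "inactive",
}

def predict_activity(fingerprint):
    best = max(LIBRARY, key=lambda known: tanimoto(fingerprint, known))
    return LIBRARY[best]

label = predict_activity({"ring", "amine"})
```

The Tanimoto coefficient is a standard similarity measure for binary molecular fingerprints; the deep learning approaches mentioned above essentially learn richer versions of both the fingerprint and the similarity function.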

The players in the field

In the scenarios mentioned above, who are the important players and, more importantly, how far have they come in terms of implementing AI in the healthcare field?

Radiology/medical imaging

Radiology/medical imaging is unquestionably the area where AI, in the form of deep learning using convolutional neural networks, is making the most impact to date. In early March, I attended a seminar in Stockholm that was organized by the Wallenberg AI, Autonomous Systems and Software Program (WASP) and focused on AI and healthcare. Interestingly, nearly all of the sessions presenting concrete results on the use of AI were from the field of radiology. I believe the situation is similar in other countries, and when browsing technical journals in the discipline it is rare to find an issue that does not mention deep learning or machine learning.

Established radiology companies are starting to make use of deep learning techniques in their products, and new companies, specifically focusing on deep learning, are emerging. Examples of these specialized companies include Enlitic (co-founded by well-known AI lecturer and entrepreneur Jeremy Howard) and DeepRadiology. A group of Stanford researchers made a big splash in early 2017 when they published a paper titled "Dermatologist-level classification of skin cancer with deep neural networks" in the highly prestigious journal Nature. Google has also produced some work that has garnered a fair bit of attention, including a project helping pathologists identify when a breast cancer tumor has spread to surrounding tissue, and work where images of the retina are used to assess cardiovascular health.

In the wake of these successes, some radiologists have issued warnings against over-interpreting the results and have urged machine learning researchers to consider how these algorithms will be used in an actual clinical setting. In particular, Luke Oakden-Rayner, who is both a radiologist and knowledgeable in deep learning, has criticized the relevance of a publicly available chest X-ray dataset that has been used in publications to demonstrate the efficacy of deep learning in radiology. In his blog posts (found here and here) he discusses at length various issues with regard to the dataset and the interpretations that have been made based on the models. Medical student John Zech also cautions us to be skeptical of implausible claims about deep learning.

Text understanding

Google recognizes the problems associated with the poor interoperability of electronic health record systems. Recently, it announced a new implementation of the Fast Healthcare Interoperability Resources (FHIR) that will enable Google to ingest standardized medical data into its systems (such as BigQuery) and apply large-scale machine learning tools to the data.
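FHIR resources are plain JSON documents with standardized fields, which is what makes large-scale ingestion feasible in the first place. As a minimal sketch, the snippet below flattens a hand-written FHIR Patient resource into an analytics-friendly row using only the standard library; the example JSON is illustrative, not output from a real EHR:

```python
import json

# Sketch of extracting a few standardized fields from a FHIR Patient
# resource before loading it into an analytics system. The JSON below is
# a minimal hand-written example, not data from a real EHR.

raw = """{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Chalmers", "given": ["Peter"]}],
  "birthDate": "1974-12-25"
}"""

def flatten_patient(resource):
    """Flatten a FHIR Patient resource into one tabular row."""
    assert resource["resourceType"] == "Patient"
    name = resource["name"][0]  # FHIR allows several names; take the first
    return {
        "patient_id": resource["id"],
        "full_name": " ".join(name["given"]) + " " + name["family"],
        "birth_date": resource["birthDate"],
    }

row = flatten_patient(json.loads(raw))
```

Because every compliant system emits the same field names (`resourceType`, `name`, `birthDate`, and so on), the same flattening code can in principle run against records from any hospital, which is precisely the interoperability win FHIR is after.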

UK company BenevolentAI claims it can automatically extract disease and drug-related information from "every medical paper" and put those results in context. It will be really interesting to see their results as this could potentially accelerate the pace of medical research enormously. Furthermore, IBM's Watson Health has also declared the ability to process and analyze millions of papers and base diagnoses on these data, yet some U.S. doctors and investors have been critical of the system.

Text understanding for diagnostics in a narrower domain, for example in combination with radiology AI models as discussed above, is a more tractable problem. This blog post by Hugh Harvey, who is a radiologist and clinical AI researcher, outlines five promising ideas for “quick-win,” non-image-analysis AI products in radiology, including text-processing solutions.

Drug discovery

In the area of drug discovery, deep learning-based AI technologies have been touted as paradigm-shifters, which has given some industry veterans chills as they remember the hype around “rational drug design” in the 1980s, and the promises that were never fulfilled.

In contrast to medical imaging, where variations of convolutional networks (ResNet, UNet, etc.) rule the day, drug discovery has made use of more exotic-sounding techniques such as variational autoencoders, generative adversarial networks and reinforcement learning techniques. The idea is to be able to rapidly generate realistic leads for new potential drug molecules that can then be tested in software and in a lab. A good way to weed out molecules that are unlikely to succeed can significantly reduce drug development time.

In drug discovery, deep learning can be used to "dream up" new molecules with a specified type of activity using techniques such as variational autoencoders and generative adversarial networks. In this example, an AI system generates novel molecular structures, scores them and keeps track of the previously seen scores.
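Stripped of the deep learning machinery, the generate-and-score loop looks like this. In the sketch below the "generator" just samples random strings over a toy alphabet and the scoring function is an invented stand-in; a real system would use a learned generative model (e.g., a variational autoencoder over molecular representations) and a trained activity predictor:

```python
import random

# Caricature of the generate-and-score loop: propose candidate molecules,
# score them against a desired property, and keep only the best for
# downstream testing. Both the generator and the objective are toy
# stand-ins for learned models.

ATOMS = "CNOS"  # toy alphabet standing in for molecular building blocks

def generate(rng, length=8):
    """Propose a random candidate "molecule" as a string of building blocks."""
    return "".join(rng.choice(ATOMS) for _ in range(length))

def score(molecule):
    # Hypothetical objective: reward nitrogen content as a stand-in for
    # some desired activity.
    return molecule.count("N") / len(molecule)

rng = random.Random(0)  # fixed seed for reproducibility
candidates = [generate(rng) for _ in range(100)]
best = sorted(candidates, key=score, reverse=True)[:5]
```

The value of the loop comes entirely from the quality of the generator and the scoring function; the better they are, the fewer doomed candidates make it to the expensive lab-testing stage.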

Additionally, more specialized companies like BenevolentAI and Atomwise are also applying deep learning techniques to their drug discovery efforts.

Similar to the other previously mentioned areas, critical voices have questioned how relevant the artificial intelligence methods are for actual drug development. Researcher Mostapha Benhenda has called AI in drug discovery overhyped, and released a list containing his view of the least and most overhyped industrial and academic players.

So where do we stand?

Revolutionary progress has been made in AI over the past 5 to 10 years. The increase in available data, significant improvements in computational power and better algorithms have made it feasible to implement AI for use cases not possible before. Clearly, healthcare, medical research and drug discovery are all very interesting fields for applying this type of AI solution. They have the potential to lessen the cognitive load on doctors, nurses and researchers, speed up pharmaceutical development and provide a much broader data-driven foundation for making diagnoses. However, there are also a number of challenges involved in leveraging these new and rather raw technologies. These include managing data without sacrificing safety and privacy, a lack of up-to-date regulations, problems related to model transparency and explainable predictions, and significant engineering overheads in putting these techniques into production.

We should, however, be clear that we are only at the very beginning of this development, and we must be careful not to read too much into press releases hyping the latest achievement. Instead, we should read with a critical eye, and always keep in mind how AI techniques can be used by people, and together with people. Specifically, we believe that the rationale for predictions made by algorithms needs to be presented in a way that builds medical practitioner trust in order to incorporate them into the decision-making process.

In the short term, radiology applications, such as image analysis and auxiliary diagnostics-related products, are likely to remain the most promising fields for AI innovations. Another type of application that could take off soon is automatic summarization and speech-to-text conversion of medical case histories. A number of companies are working on structured, automated medical history-taking, which will open up new possibilities for machine learning to model disease progression and interdependencies between medical conditions.

Mikael Huss
Data Scientist

About

Mikael Huss, Data Scientist at Peltarion. Holds a PhD in Computational Neuroscience and an Associate Professorship in Bioinformatics, both from the Royal Institute of Technology (KTH) in Stockholm. Mikael has worked as an academic researcher for 10+ years, as a part-time freelance data scientist helping out smaller companies for 5 years, and more recently as a Senior Data Scientist at IBM before joining Peltarion.
