
Challenges of implementing AI in healthcare

November 25, 2018 / 5 min read

The potential of AI in healthcare is surging, and its possibilities extend well beyond assisting doctors with simple diagnoses. According to an Accenture report, the AI healthcare market is expected to reach $6.6 billion by 2021, a compound annual growth rate of 40 percent. However, the adoption of AI in healthcare is still in its early days, due to a number of challenges impeding its momentum.

In my previous blog post on AI and healthcare, I discussed some of the areas where AI is pushing the envelope. However, a few challenges currently stand in the way of even greater adoption within the medical field. This blog post explores some of the obstacles hampering the implementation of AI in healthcare today.

Data privacy

Privacy, while important in every industry, is typically enforced especially vigorously when it comes to medical data. Since patient data in European countries is typically not allowed to leave Europe, many hospitals and research institutions are wary of cloud platforms and prefer to use their own servers.

For startup companies, it’s hard to get access to patient data to develop products or business cases. Usually, this is easier for medical researchers, who can make use of standard application procedures meant to facilitate research based on patient clinical data.

Regulation

AI algorithms intended for clinical use in Europe must obtain CE marking. More specifically, they need to be classified according to the Medical Device Directive, as explained very well in this blog post by Hugh Harvey. Stand-alone algorithms (algorithms that are not integrated into a physical medical device) are typically classified as Class II medical devices.

The General Data Protection Regulation (GDPR), which took effect in May 2018, also introduces a number of requirements that need to be complied with and that are, in some cases, not clear-cut. For example, some degree of transparency in automated decision-making (see below) will be required, but it’s hard to tell from the text of the regulation what level of transparency will be enough, so we’ll probably need to await the first court cases to learn where the border lies. Other issues are likely to arise from the requirement for informed consent. For example, will it still be possible to perform research on dementia under the new rules, considering that some of the participating individuals may not be able to give informed consent?

Transparency

Despite potential difficulties in establishing parameters, transparency of decision support is, of course, paramount in medical AI. A doctor needs to be able to understand and explain why a certain procedure was recommended by an algorithm, which necessitates the development of more intuitive and transparent prediction-explanation tools. There is often a trade-off between predictive accuracy and model transparency, especially with the latest generation of AI techniques based on neural networks, which makes this issue even more pressing. An interesting viewpoint on transparency and algorithmic decision-making is given in a paper titled Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, which was co-written by a lawyer, a computer scientist and an ethicist.
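The counterfactual idea from that paper can be illustrated with a toy sketch: given a model's prediction for a patient, search for the smallest change to the input that would flip the decision, and report that change as the explanation ("had your blood pressure been X, the recommendation would have differed"). The model, weights and feature meanings below are entirely hypothetical, and the search is a bare-bones gradient descent rather than the paper's full formulation:

```python
import numpy as np

# Toy "risk model": logistic regression on two features
# (e.g. blood pressure, BMI), with made-up weights.
w = np.array([0.8, 0.6])
b = -1.0

def predict_proba(x):
    """Predicted risk in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target=0.5, lr=0.05, steps=1000):
    """Search for a small change to x that pushes the predicted
    risk below `target`: gradient descent on the model output
    plus a penalty that keeps the counterfactual close to x."""
    x_cf = x.copy()
    for _ in range(steps):
        p = predict_proba(x_cf)
        if p < target:
            break
        # Gradient of the sigmoid output w.r.t. the input,
        # plus a pull back toward the original patient x.
        grad = p * (1 - p) * w + 0.1 * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

x = np.array([2.0, 1.5])   # a "high-risk" patient
x_cf = counterfactual(x)
print(predict_proba(x), predict_proba(x_cf))
```

The difference `x_cf - x` is the human-readable part: it names the feature changes that would have altered the recommendation, without opening the model itself.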

Sociocultural acceptance

Doctors make decisions based on learned knowledge, previous experience, intuition and problem-solving skills. Getting doctors to consider suggestions from an automated system can be difficult. It’s likely that some elements of AI literacy need to be introduced into medical curricula so that AI is perceived not as a threat to doctors, but as an aid and amplifier of medical knowledge. In fact, if AI is introduced in a way that empowers human workers rather than displacing them, it could free up their time for more meaningful tasks or free up resources to employ more staff.

Engineering/technical debt

The latest AI techniques, based on deep neural networks, have achieved impressive performance in the last five to seven years. However, the tooling and infrastructure needed to support these techniques are still immature, and few people have the technical competence to deal with the whole range of data and software engineering issues involved. In medicine especially, AI solutions will often face problems related to limited data and variable data quality. Predictive models will need to be re-trained as new data comes in, while keeping a close eye on changes in data-generation practices and other real-world issues that may cause the data distributions to drift over time. If several data sources are used to train models, additional types of “data dependencies,” which are seldom documented or explicitly handled, are introduced.
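As a concrete illustration of drift monitoring, one simple approach is to statistically compare the distribution of incoming data against the data the model was trained on. The lab-value scenario and all numbers below are made up for the sake of the sketch:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical lab values: training-time data vs. newly collected
# data after, say, a change of measurement equipment shifted readings.
train_values = rng.normal(loc=5.0, scale=1.0, size=1000)
new_values = rng.normal(loc=5.6, scale=1.0, size=1000)

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
# new data no longer follows the training distribution — a signal
# that the model may need to be re-trained.
stat, p_value = ks_2samp(train_values, new_values)
if p_value < 0.01:
    print("Drift detected: consider retraining the model.")
```

In practice a check like this would run per feature on every new data batch, with alert thresholds tuned to the application.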

In medical applications, transfer learning — using a pre-trained model and adapting it to one’s specific use case — is often applied, but then a “model dependency” is introduced where the underlying model may need to be retrained or change its configuration over time. The large amount of “glue code” typically needed to hold together an AI solution, together with potential model and data dependencies, makes it very difficult to perform integration tests on the whole system and make sure that the solution is working properly at any given time.

An operational AI platform such as the one we are building at Peltarion, handling the entire modeling process including software dependencies, data and experiment versioning as well as deployment, has the potential to solve many of these engineering and technical debt issues.

Summary

There is a lot of promise for AI in healthcare, but advances in many areas are needed before AI solutions can be deployed in a safe and ethical way. Regulation, privacy and sociocultural aspects need to be addressed by society as a whole, but AI software tools such as the Peltarion platform can help mitigate some of the challenges related to engineering and technical debt. With an operational AI platform, an AI developer can avoid having to worry about software library dependencies, inconsistencies in input data processing steps and the inadvertent introduction of bugs into production code.

Want to know more about AI in healthcare? Join us for a series of free webinars to learn how to bring operational AI into your healthcare organization.

Mikael Huss

    Data Scientist

Mikael Huss is a Data Scientist at Peltarion. He holds a Ph.D. in computational neuroscience and the title of associate professor in bioinformatics, both from the KTH Royal Institute of Technology in Stockholm. Mikael worked as an academic researcher for more than 10 years and as a part-time freelance data scientist helping smaller companies for five years, and most recently was a senior data scientist at IBM before joining Peltarion.
