Machine and deep learning: Non-critical deployment

This is the second part of a four-part blog series. See part one.

In my last blog post, Machine and deep learning: Experimentation stage, I outlined the first phase of adopting machine and deep learning (ML/DL): experimentation and prototyping.

In the figure below, we show the steps that companies typically evolve through when adopting AI, ML and DL solutions. The second step is the careful use of ML/DL in products, solutions and services in a non-critical capacity. Although this is the point where ML/DL models are put into operation and customers can experience the benefits, companies are initially cautious about using these models because they are viewed as unpredictable. Consequently, deployment is limited to non-critical functionality in the offering.

How the use of AI/ML/DL evolves in industry

Starting with non-critical deployment of ML/DL components is essential: it allows the company to learn about the challenges it will face as it moves toward more critical deployments, while avoiding the exceptional, unpredictable and erratic behavior that could severely undermine the customer value of its products, solutions and services.

In our research (see the reference below), we have found that companies that have reached this step experience a number of core challenges. These challenges are associated with the four stages of working with ML/DL components: assembling datasets, creating models, training and evaluating, and deploying. Below we discuss the key challenges in each stage.

During the first stage, where the company assembles the datasets, organizations experience three main challenges:

  1. The first is accessing the data needed for training. Much of the data is collected from a variety of data silos throughout the organization, which may also impact the validation of the data because the various silos may use different semantics and schemas for related data items.
  2. Second, in most cases, ML/DL models require labeled data for supervised learning. Although the company may have the data available, it is often far from obvious which labels should be associated with each data item.
  3. Finally, available datasets are typically assembled for specific purposes, causing the data to be unrepresentative of reality and to significantly over-represent specific cases; the short sketch after this list illustrates how such over-representation can be surfaced.
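
As a minimal illustration of the third challenge, the sketch below compares the label distribution in a dataset with an assumed real-world distribution. It uses pandas and toy numbers that do not come from the article; in practice, the expected shares would be based on domain knowledge.

```python
# Minimal sketch: check how representative a labeled dataset is.
# Assumes a pandas DataFrame with a categorical "label" column; the
# data and the expected real-world shares below are hypothetical.
import pandas as pd

df = pd.DataFrame({"label": ["ok"] * 950 + ["fault"] * 50})  # toy dataset

observed = df["label"].value_counts(normalize=True)
expected = pd.Series({"ok": 0.70, "fault": 0.30})  # assumed real-world shares

comparison = pd.DataFrame({"observed": observed, "expected": expected})
comparison["gap"] = comparison["observed"] - comparison["expected"]
print(comparison)
# A large gap signals that specific cases are over-represented and that
# the dataset may not reflect what the deployed model will encounter.
```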

During the second stage, the engineers are concerned with creating a model that is aligned with the problem at hand and that generates the desired output, such as a classification or prediction. Whereas during experimentation and prototyping, any model that achieves some level of accuracy is acceptable, in this step the model will be exposed to customers. This requires the quality of the model to be higher, but most companies lack the skills and competencies to improve on a basic model. Doing so requires the ability to analyze which elements, algorithms or layers in the model cause the lack of accuracy as well as the ability to take corrective action to address the problem.
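
One way to start the kind of analysis described above is simple per-class error analysis. The sketch below is only an illustration: it assumes scikit-learn and uses one of its bundled toy datasets, neither of which is prescribed by the article.

```python
# Minimal sketch of per-class error analysis with scikit-learn (an
# assumed toolchain; the article does not prescribe any framework).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
pred = model.predict(X_test)

# Per-class precision and recall reveal *which* classes drag accuracy
# down, which is the first step toward targeted corrective action.
print(classification_report(y_test, pred))
print(confusion_matrix(y_test, pred))
```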

The training and evaluation stage is concerned with training and evaluating the model defined in the previous stage. The key challenge here often relates to the availability of data for training and evaluation. Although approaches such as k-fold cross-validation exist and more experienced data scientists will know how to use these, in practice the company is in the early stages of adopting AI/ML/DL solutions and the amount of available talent in the company tends to be limited.
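
For readers unfamiliar with it, the sketch below shows what k-fold cross-validation looks like in practice. It assumes scikit-learn and one of its bundled datasets; neither is mandated by the article.

```python
# Minimal sketch of 5-fold cross-validation with scikit-learn (an
# assumed toolchain), useful when labeled data is scarce.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)

# Each fold is used once for evaluation and four times for training, so
# every labeled example contributes to both training and evaluation.
scores = cross_val_score(model, X, y, cv=5)
print("fold accuracies:", scores)
print("mean accuracy:  ", scores.mean())
```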

The deployment stage is on the receiving end of the challenges experienced in the previous stages and this frequently results in a significant training-serving skew. This means that the model performs significantly worse in deployment than in training. This is typically caused by a difference between the data used during training and the data served during operations.
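
One simple way to detect such a skew is to compare the distribution of a feature at training time with what the model sees in operation. The sketch below uses synthetic data and a two-sample Kolmogorov-Smirnov test from scipy; both are assumptions for illustration, not a method prescribed by the article.

```python
# Minimal sketch: flag training-serving skew by comparing a feature's
# training-time and serving-time distributions. Data is synthetic and
# the KS test is just one possible check.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
serving_feature = rng.normal(loc=0.4, scale=1.3, size=5_000)  # drifted

statistic, p_value = ks_2samp(training_feature, serving_feature)
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3g}")
# A large statistic (tiny p-value) indicates that serving data no longer
# matches training data, a common cause of models performing worse in
# deployment than in training.
```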

To summarize, companies evolve through a number of steps when adopting AI/ML/DL models. In this article, we discussed the challenges that companies experience in the second step, where the company deploys its first ML/DL models in non-critical parts of products, solutions and services. The main challenges concern assembling labeled datasets of sufficient quality and quantity, as well as building the engineering skills needed to improve under-performing models. These challenges can cause a significant training-serving skew when models are deployed.

Machine and deep learning are innovative technologies that can provide incredible results and benefits. The purpose of this article was to describe the inherent challenges so that companies can adopt ML/DL solutions while avoiding the traps outlined above.

Good luck!

This article was originally published on janbosch.com

References

  1. Lucy Ellen Lwakatare, Aiswarya Raj, Jan Bosch, Helena Holmström Olsson and Ivica Crnkovic, "A taxonomy of software engineering challenges for machine learning systems: An empirical investigation", XP 2019 (forthcoming).
Jan Bosch
Member of the Board

About

Jan Bosch is a professor in Software Engineering at Chalmers University of Technology in Gothenburg and has been a member of the board at Peltarion since September 2017. Apart from this, he is the director of the Software Center, runs the consulting firm Boschonian AB and is the author of several books as well as the editor for Journal of Systems and Software and Science of Computer Programming.
