From idea to AI powered applications, the challenges (part 1)

April 23 / 5 min read
  • Reynaldo Boulogne

This is part 1 of a two-part series. You can find part 2 here.

If you’re like me, you’re mesmerized by all the possibilities of using deep learning to create never-before-possible solutions and applications.

My mind tingles when I start thinking of all the cool applications out there that have yet to be created.

While this is an exciting idea, the reality is that creating a deep learning powered application is rarely straightforward, particularly if it’s your first one. And I’m not talking about the challenges of learning deep learning and getting access to data (although those are obviously already quite a hurdle to overcome). That is only part of the story. There are quite a few additional challenges that make it tricky to go from an idea to a final product, and they have nothing to do with deep learning itself.

To talk about these challenges, I’m going to group them into two categories and explore them over the course of two articles:

  • Model development lifecycle - Part 1 (this article)
  • Model serving and integration - Part 2 (click here)

Both articles are written from the point of view of someone who already knows deep learning theory and therefore already knows how to build a good model.

Let’s dive into it!

Model development lifecycle challenges

A model’s lifecycle covers everything from building, training, evaluating, and tweaking the model, to deploying/serving and updating it.

Deep learning model development lifecycle (Original)

As long as you’re working on a toy problem, you will probably have no trouble carrying out each of these steps on your own computer (provided you have a GPU) or on something like Google Colab.

But as soon as you start building a model aimed for an application, things get a bit trickier:

  • Computing power - To reach the level of model performance you’re after, you will have to train on large datasets, explore multiple models to establish a baseline (Model Exploration loop), and/or tune your model repeatedly, ideally running experiments in parallel to save time (Model Refinement loop). 

    This can be very compute-intensive and you run the risk of running out of computing power in this part of the iterative process, whether that’s because you’re maxing out your GPU by trying to run experiments in parallel or because you're hitting the resource limits of freely available online tools.
  • Long development times - A consequence of the above is that it can take a lot of time to build, train, evaluate and tweak your models until you get the performance you’re looking for, even on the simplest of datasets. 

    Having to wait hours for each Model Exploration or Refinement loop to progress enough before you can decide whether the model is performing as you want, and repeating this over and over again, is painfully slow and can be demotivating.
  • Keeping track of things - As with any software project, it’s good practice to systematically keep track of the different stages, changes, and configurations of your deep learning model throughout its development lifecycle.

    Because development is iterative, that means recording multiple changes and tweaks to your data, model architecture, and hyperparameters in pursuit of the desired results. Keeping track of who changed what, when, and why can quickly become an overwhelming task. 

    If you’re like me and you’ve done deep learning projects in the past, you know how quickly you end up with a pile of files on your laptop labeled ‘latest_dataset.zip’, ‘latest_latest_dataset.zip’, ‘final_dataset.zip’, and so on. You get the idea. It gets even worse when keeping track of the different trained models, their hyperparameters, and their results.
  • Managing experiments/results - Something that follows from the above is that it can also be difficult to compare experiments, results, hyperparameter settings, which versions of training data and model architectures were used, etc. in order to analyze what worked and what didn’t. 

    It’s easy to lose track of the differences between multiple experiments, which makes it difficult to decide what the next experiment should look like (a minimal sketch of this manual bookkeeping follows after this list). 
  • Dealing with non-deep-learning stuff - Each time you start a project, you spend countless hours on work that has nothing to do with developing your model, such as: 
    - debugging your deep learning code
    - finding and/or implementing the deep learning models you want to try
    - setting up and managing storage, compute, and deployment resources
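
To make that bookkeeping problem concrete, here is a minimal sketch (plain Python, not Peltarion code) of what tracking experiments by hand often ends up looking like: each run’s configuration and metrics get dumped to a JSON file, and a small helper sorts the logged runs by a validation metric afterwards. The file names, metric values, and dataset reference below are hypothetical placeholders.

    # Minimal sketch of manual experiment tracking (hypothetical, not Peltarion code).
    import json
    import time
    from pathlib import Path

    RUNS_DIR = Path("runs")
    RUNS_DIR.mkdir(exist_ok=True)

    def log_run(config, metrics):
        """Save one run's configuration and results to a timestamped JSON file."""
        run_file = RUNS_DIR / f"run_{int(time.time() * 1000)}.json"
        run_file.write_text(json.dumps({"config": config, "metrics": metrics}, indent=2))
        return run_file

    def compare_runs(metric="val_accuracy"):
        """Load every logged run and sort them by the chosen validation metric."""
        runs = [json.loads(p.read_text()) for p in RUNS_DIR.glob("run_*.json")]
        return sorted(runs, key=lambda r: r["metrics"].get(metric, 0.0), reverse=True)

    # Typical usage: train, log the outcome, then inspect all runs so far.
    config = {"dataset": "dataset_v3.zip", "learning_rate": 1e-3, "batch_size": 32}
    metrics = {"val_accuracy": 0.87, "val_loss": 0.41}  # placeholder values; these would come from training
    log_run(config, metrics)
    for run in compare_runs():
        print(run["config"]["learning_rate"], run["metrics"]["val_accuracy"])

Even this tidy version only covers hyperparameters and final metrics; as soon as dataset versions, architecture changes, and intermediate checkpoints enter the picture, this kind of homegrown bookkeeping grows quickly, which is exactly the pain point described above.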

We built the Peltarion platform with these problems in mind, or rather, with the idea of helping users skip all of the above, so that they can focus on building their applications:

  • The training power you need, zero setup required - The Peltarion platform gives you access to all the GPU resources you need, without you having to spend time configuring anything. Train one or multiple experiments at the same time without worrying about maxing out resources. We will automatically scale the computing resources for you to meet your needs.
  • Version handling from start to finish - The platform automatically tracks and versions every change and tweak you make to your datasets, model architecture, and hyperparameters, and records the results of each experiment you run. Because the whole lifecycle of your project lives in one place, nothing ever gets lost, giving you guaranteed traceability, reproducibility, and reusability of your work.
  • Easy experimentation - The platform is built around the idea that model development requires running multiple experiments, which is why building, configuring, running, and comparing multiple experiments is a breeze on the platform.
  • Production-ready deep learning model implementations - Use the latest neural network architectures and techniques without needing to implement them yourself. Focus on modeling, not on software engineering, and stop spending time debugging code.
  • Shorter time to delivery - Combined, all of the features above let you drastically shorten project development times, and make it easier to regularly test or update models and continuously roll them out into production.
  • No-code environment - Are you not familiar with deep learning frameworks? No problem! The Peltarion platform is a no-code environment designed to let users build their own models, either from scratch using modular drag-and-drop building blocks or by starting from premade/pretrained networks.

But don’t take our word for it, why not try it out for yourself? It’s free!

Click here to find out more about the platform and how it can make your work life easier. And if you need some inspiration to get started, why not check out these deep learning tutorials?

This is only part 1 of a two-part series. If you’ve read this far, you’ll probably like part 2, where I discuss what it takes to use a trained model in an application. See you there!

  • Reynaldo Boulogne

    With over 15 years of experience, Reynaldo has worked within the intersection of business and technology across multiple sectors, most recently at Klarna and Spotify. He is passionate about innovation, leadership, and building things from scratch. Reynaldo is also a former Vice-chairman of the Stockholm based AI forum, Stockholm AI.