Running several experiments and comparing them is best practice when working with deep learning; your first experiment is almost never your best one.
We at Peltarion suggest that you run several experiments where you change models or parameter settings, and then evaluate the experiments against each other. This is the most reliable way to improve your models and find the best-performing one.
Tips for beginners and intermediate users
Set the foundation first
Before you start to improve, make sure that you’ve set the foundation to achieve great results.
Make sure that your problem is correctly formulated.
Are you dealing with a classification problem or a regression problem?
Did you choose informative and independent features as input for your model?
Once you have a clear idea of your problem, it becomes much easier to choose the input and target features and the model to use in the Experiment wizard.
How and where? Click the New experiment button or the Use in new experiment button to open the Experiment wizard.
Prepare your data.
Data preprocessing is one of the most important steps toward better results. When you are dealing with tabular data or images, consider normalizing your dataset. Normalization rescales your data to a common scale without distorting the differences in the range of values or losing information. It is especially useful when your variables have different units of measure, because the features can then have very different scales. Differences in the scales across input variables may increase the difficulty of the problem being modeled. There are different techniques for data normalization, such as standardization and min-max normalization. Another preprocessing step might be handling null values in your dataset.
How and where? You normalize a feature in the Datasets view. Click the wrench icon and select the type of normalization (available for data encoded as image or numeric).
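The platform applies normalization for you, but the two techniques named above are easy to see in plain NumPy. This is a conceptual sketch with hypothetical feature values, not the platform's implementation:

```python
import numpy as np

# Hypothetical tabular feature (e.g. age in years) on its own scale.
ages = np.array([22.0, 35.0, 58.0, 41.0, 29.0])

# Min-max normalization: rescale into [0, 1] without distorting
# the relative differences between values.
min_max = (ages - ages.min()) / (ages.max() - ages.min())

# Standardization: shift and scale to zero mean and unit variance.
standardized = (ages - ages.mean()) / ages.std()

print(min_max)        # values rescaled into [0, 1]
print(standardized)   # zero mean, unit variance (up to float rounding)
```

Either technique puts features with different units on a common scale, which is exactly why normalization helps when input variables differ widely in range.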
Make sure to have enough data.
Data is at the core of any deep learning project, and an insufficient dataset size is often responsible for poor performance. Find a compromise between model complexity and dataset size. You can use random transformations to add more variation to the training dataset. Done right, this reflects the variation in the real data and therefore helps the model generalize better.
Random transformation only works on images and image-like data. It is mostly useful for images, but advanced users may find it interesting to apply to any 3-axis tensor.
How and where? Add a Random transformation block after the Input block in the Modeling view.
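Conceptually, a random transformation applies a randomly chosen, label-preserving operation each time a sample is fed to training. Here is a minimal NumPy sketch; the flip-and-rotate choice is an illustrative assumption, not the platform's exact set of transformations:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def random_transform(image):
    """Randomly flip and rotate an H x W x C image array.

    The same idea applies to any 3-axis tensor, as noted above.
    """
    if rng.random() < 0.5:
        image = image[:, ::-1, :]   # horizontal flip
    k = int(rng.integers(0, 4))     # 0-3 quarter turns
    return np.rot90(image, k)       # rotates in the H x W plane

# Tiny 2 x 2 "image" with 3 channels, just to show the mechanics.
image = np.arange(2 * 2 * 3).reshape(2, 2, 3)
augmented = random_transform(image)
print(augmented.shape)
```

Because each epoch sees a slightly different version of every image, the effective variety of the training set grows without collecting new data.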
Use pretrained blocks.
Pretrained blocks are very useful if you have a small dataset. When you use pretrained blocks you are doing transfer learning: you take a model that has been trained on a large dataset (such as ImageNet for image data) and transfer parts of that knowledge to solve another, related problem. With transfer learning you can improve your model's generalization and achieve better performance. We have many pretrained blocks for different tasks.
How and where? Select a pretrained block in the Modeling view.
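To see why reusing pretrained weights helps on a small dataset, here is a minimal transfer-learning sketch in plain NumPy: a frozen feature extractor plus a small trainable head. The random "pretrained" weights, the tiny dataset, and the training loop are all hypothetical stand-ins; on the platform the pretrained weights come from training on a large dataset such as ImageNet:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in for a pretrained feature extractor: fixed (frozen) weights.
# In real transfer learning these come from training on a large dataset.
W_pretrained = rng.normal(size=(8, 4))

def extract_features(x):
    # Frozen: W_pretrained is never updated during fine-tuning.
    return np.maximum(x @ W_pretrained, 0.0)   # ReLU features

# New task head: the only part trained on the small dataset.
w_head = np.zeros(4)

def predict(x):
    return extract_features(x) @ w_head

# Tiny hypothetical dataset: 16 samples, 8 inputs, binary targets.
X = rng.normal(size=(16, 8))
y = (X.sum(axis=1) > 0).astype(float)

# Train only the head with gradient descent on squared error.
for _ in range(200):
    err = predict(X) - y
    grad = extract_features(X).T @ err / len(X)
    w_head -= 0.05 * grad

print(np.mean((predict(X) - y) ** 2))   # training error after fine-tuning
```

Because only the small head is trained, far fewer examples are needed than training the whole network from scratch, which is why pretrained blocks shine on small datasets.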