Transfer learning

Transfer learning means reusing the knowledge learned by one of your models in another model.

You keep the weights from a previously run experiment, that is, you keep what the model has already learned.

Transfer learning is useful if you have trained on a big dataset and want to reuse parts of that knowledge to solve other, related problems.
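In practice, keeping the learned weights often amounts to saving them after one experiment and loading them into a model with the same architecture later. As a minimal sketch, assuming tf.keras and an illustrative architecture and file name:

```python
import tensorflow as tf

def build_model():
    # Hypothetical architecture; the point is that both models match.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(784,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

model = build_model()
# ... train the model in the first experiment ...

# Keep what the model has learned by saving its weights.
model.save_weights("experiment_1.weights.h5")

# A new model with the same architecture can pick up that knowledge.
new_model = build_model()
new_model.load_weights("experiment_1.weights.h5")
```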

Pretrained blocks

When using pretrained blocks, for example EfficientNet or BERT, it is important to know which dataset the pretrained block was trained on. Choose weights trained on data that closely resembles the dataset you want to use for your own model.
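For example, with tf.keras (one possible framework), an EfficientNet block with weights pretrained on ImageNet could be loaded like this; the input shape is an illustrative choice:

```python
import tensorflow as tf

# Load EfficientNetB0 with weights pretrained on ImageNet.
# include_top=False drops the original classification head so the
# block can be reused for a different task.
base = tf.keras.applications.EfficientNetB0(
    weights="imagenet",   # pick weights trained on data resembling yours
    include_top=False,
    input_shape=(224, 224, 3),
)
```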

Examples of transfer learning

ImageNet

Say you want to tell whether an image depicts a truck or a car. Train your first model on ImageNet; among other things, the model learns that a car has wheels. It was never given any explicit information about wheels, it inferred this from lots of pictures labeled “car”. The model stores this knowledge in the weights it learns.

Now transfer the learned weights from the first model into a new model. The new model can easily be adapted to recognize trucks, as they too have wheels. It can also learn the difference between a car and a truck without having to be trained on cars from scratch.
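A common way to do this is to freeze the pretrained block and train only a small new head on the new task. A sketch with tf.keras, assuming an EfficientNetB0 base and a hypothetical truck-vs-car dataset:

```python
import tensorflow as tf

# Start from a model pretrained on ImageNet.
base = tf.keras.applications.EfficientNetB0(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3),
)
base.trainable = False  # keep the transferred knowledge fixed at first

# Add a new head that only has to tell trucks from cars.
inputs = tf.keras.Input(shape=(224, 224, 3))
x = base(inputs, training=False)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(truck_vs_car_dataset, epochs=5)  # hypothetical dataset
```

Once the new head performs reasonably, the base can optionally be unfrozen and fine-tuned with a low learning rate.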

Autoencoder

Another possible application is to build and train an autoencoder, and then copy the encoder part (and possibly the decoder part too) as a separate block into a new model.
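A minimal sketch of this idea in tf.keras, with illustrative layer sizes; the standalone encoder shares its weights with the trained autoencoder:

```python
import tensorflow as tf

# A small autoencoder (hypothetical sizes).
inputs = tf.keras.Input(shape=(784,))
encoded = tf.keras.layers.Dense(32, activation="relu", name="encoder")(inputs)
decoded = tf.keras.layers.Dense(784, activation="sigmoid", name="decoder")(encoded)
autoencoder = tf.keras.Model(inputs, decoded)
# ... train the autoencoder on unlabeled data ...

# Copy out the encoder part as a standalone model.
encoder = tf.keras.Model(inputs, encoded)

# Reuse the encoder as the first stage of a new model,
# here with a hypothetical classification head on top.
clf_out = tf.keras.layers.Dense(10, activation="softmax")(encoder.output)
classifier = tf.keras.Model(encoder.input, clf_out)
```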
