English BERT uncased

The BERT (Bidirectional Encoder Representations from Transformers) network redefines the state of the art for Natural Language Processing (NLP).
The English BERT uncased snippet allows you to get started quickly with your language-based model.

Check the Multilingual BERT cased snippet if you want to work with languages other than English.

The English BERT uncased snippet

The BERT snippet includes:

  • An Input block.
    Select an input Feature that has the Text encoding in the Datasets view.

  • A Tokenizer block configured with the English uncased vocabulary.

  • An English BERT encoder block with pre-trained weights.

  • A Dense block.
    Adjust the number of Nodes in this block to match the size of the Target block's feature.

  • A Target block, which may be linked to any categorical or numeric feature (the sketch below shows how these blocks fit together).
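
Outside the Platform, that block structure could be sketched with the Hugging Face Transformers library and TensorFlow. This is an illustration only, not how the Platform is implemented; the bert-base-uncased checkpoint name, the sequence length of 128, and the 3-class categorical target are assumptions made for the example.

    # Minimal sketch of the snippet's block structure (illustration only).
    import tensorflow as tf
    from transformers import BertTokenizer, TFBertModel

    # Tokenizer block: English uncased vocabulary
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

    # English BERT encoder block: pre-trained weights
    encoder = TFBertModel.from_pretrained("bert-base-uncased")

    # Input block: a text feature
    texts = ["This film was surprisingly good."]
    tokens = tokenizer(texts, padding=True, truncation=True,
                       max_length=128, return_tensors="tf")

    # Pooled representation of the [CLS] token, shape (batch, 768)
    pooled = encoder(**tokens).pooler_output

    # Dense block: the number of nodes matches the Target feature
    # (an assumed 3-class categorical target here)
    dense = tf.keras.layers.Dense(3, activation="softmax")

    # Target block: class probabilities for the categorical target
    predictions = dense(pooled)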

How to train the BERT snippet

Disclaimer
Please note that datasets, machine-learning models, weights, topologies, research papers and other content, including open source software, (collectively referred to as “Content”) provided and/or suggested by Peltarion for use in the Platform and otherwise, may be subject to separate third party terms of use or license terms. You are solely responsible for complying with the applicable terms. Peltarion makes no representations or warranties about Content. You expressly relieve us from any and all liability, loss or risk arising (directly or indirectly) from Your use of any third party content.

The provided weights were pre-trained on general language tasks (masked language modeling and next-sentence prediction), which gives BERT a general understanding of English. You could use these weights as-is and train only the blocks that come after the BERT encoder block. However, the recommended practice is to fine-tune your entire model, including the English BERT encoder block, for your task.

There is a general procedure for fine-tuning pre-trained snippets.
However, for BERT models, we simply recommend fine-tuning the whole model on your problem:

  • Set the BERT encoder block to Trainable, like all the other blocks, set the learning rate very low, and train the whole model until the results are satisfactory (as sketched below).
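
For readers who prefer code, the two approaches map roughly onto the trainable flag of the Keras encoder in the sketch above. This is an analogy, not the Platform's actual mechanism:

    # Option 1: keep the pre-trained weights as-is and train only the
    # blocks that come after the encoder (the Dense block in the sketch).
    encoder.trainable = False

    # Option 2 (recommended): fine-tune the entire model, encoder
    # included, with a very low learning rate.
    encoder.trainable = True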

Memory consumption of BERT

The English BERT encoder is a very large model, which requires a large amount of memory to train.

The memory consumption estimate displayed when using a BERT model is unfortunately not accurate at the moment.
As a rule of thumb, keep the product Batch size * Sequence length lower than 3000 to avoid memory issues.
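
For example, with a Sequence length of 512 tokens the Batch size should stay at or below 5 (512 × 6 = 3072 already exceeds 3000), while a Sequence length of 128 allows a Batch size of up to 23. A purely illustrative helper:

    # Rule of thumb: Batch size * Sequence length < 3000 (approximate).
    # Hypothetical helper, not part of the Platform.
    def max_batch_size(sequence_length, budget=3000):
        return budget // sequence_length

    print(max_batch_size(512))   # 5
    print(max_batch_size(128))   # 23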

If an experiment fails because the model requires too much memory, try reducing the Batch size in the experiment’s settings.
You can also consider reducing the Sequence length of the input feature, as long as this doesn't remove significant information.

Fine-tuning a BERT model

BERT is also a powerful model that can learn most fine-tuning datasets very easily. This means it is prone to catastrophic forgetting and to overfitting the new dataset when trained with inappropriate settings.

To avoid these issues, train your model with a very low Learning rate, of the order of 10⁻⁵ to 10⁻⁶.
In addition, only train for a few Epochs, between 1 and 3.
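
Continuing the earlier Transformers/Keras sketch, a fine-tuning run with these settings could look roughly as follows. The two-example dataset, the 3-class target, and the exact learning rate of 10⁻⁵ are placeholder assumptions:

    # Wrap the encoder and a Dense block in a Keras model and fine-tune
    # the whole thing with a very low learning rate for a few epochs.
    input_ids = tf.keras.Input(shape=(128,), dtype=tf.int32, name="input_ids")
    attention_mask = tf.keras.Input(shape=(128,), dtype=tf.int32, name="attention_mask")
    pooled = encoder(input_ids, attention_mask=attention_mask).pooler_output
    outputs = tf.keras.layers.Dense(3, activation="softmax")(pooled)
    model = tf.keras.Model([input_ids, attention_mask], outputs)

    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),  # order of 10^-5
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )

    # Tiny placeholder dataset (two dummy examples) to make the call concrete.
    enc = tokenizer(["great movie", "terrible movie"], padding="max_length",
                    truncation=True, max_length=128, return_tensors="tf")
    features = {"input_ids": enc["input_ids"],
                "attention_mask": enc["attention_mask"]}
    labels = tf.constant([0, 1])
    train_dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)

    model.fit(train_dataset, epochs=2)   # between 1 and 3 epochs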

Available weights

The English BERT encoder block of this snippet uses the BERT-Base Uncased weights, pre-trained by the Google AI Language Team on BookCorpus and English Wikipedia.

Terms

When using pre-trained snippets, additional terms apply: BERT with weights licence.
