Multilingual BERT cased
The BERT (Bidirectional Encoder Representations from Transformers) network redefines the state of the art for Natural Language Processing (NLP).
The Multilingual BERT cased snippet allows you to quickly get started with your language-based model.
Why use a multilingual model?
A multilingual model allows you to deploy a single model able to work with any of over 100 languages.
More than a simple convenience, multilingual models often perform better than monolingual models.
One reason is that the training data available is generally more limited in any single language.
In addition, many languages share common patterns that the model can pick up more easily when it is trained with a variety of languages.
The Multilingual BERT cased snippet
The Multilingual BERT snippet includes:

- An Input block. Select an input Feature that has the Text encoding in the Datasets view.
- A BERT Tokenizer block configured with the Multilingual cased vocabulary.
- A Multilingual BERT encoder block with pre-trained weights.
- A Dense block. Adjust the number of Nodes of this block to match the feature of the Target block.
- A Target block, which may be linked to any categorical or numeric feature.
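Outside the Platform, a roughly equivalent pipeline can be sketched with the Hugging Face transformers library and PyTorch. This is only an illustrative analogy of the block structure above; the model name bert-base-multilingual-cased and the 3-node output layer are assumptions for the example, not Platform internals.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# BERT Tokenizer block: Multilingual cased vocabulary
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

# Multilingual BERT encoder block: pre-trained weights
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")

# Dense block: 3 nodes stand in for a hypothetical 3-category target feature
dense = torch.nn.Linear(encoder.config.hidden_size, 3)

# Input block: a text feature, tokenized and truncated to a fixed sequence length
inputs = tokenizer("Ett exempel på ett annat språk.", return_tensors="pt",
                   truncation=True, max_length=128)

# Forward pass: use the [CLS] token representation as the sequence summary
hidden = encoder(**inputs).last_hidden_state[:, 0]
logits = dense(hidden)  # Target block: compare these logits to the target feature
```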
How to train the BERT snippet
Note
Disclaimer: Please note that datasets, machine-learning models, weights, topologies, research papers and other content, including open source software, (collectively referred to as “Content”) provided and/or suggested by Peltarion for use in the Platform and otherwise, may be subject to separate third party terms of use or license terms. You are solely responsible for complying with the applicable terms. Peltarion makes no representations or warranties about Content. You expressly relieve us from any and all liability, loss or risk arising (directly or indirectly) from Your use of any third party content.
The weights provided were pre-trained for a specific task, which gives BERT a general understanding of over 100 languages. You could use these weights as-is and train only the blocks that come after the Multilingual BERT encoder block. However, the recommended practice is to fine-tune your entire model, including the Multilingual BERT encoder block, for your task.
There is a general procedure for fine-tuning pre-trained snippets.
However, for BERT models, we simply recommend fine-tuning the whole model on your problem:

- Set the Multilingual BERT encoder block as Trainable like all the other blocks, set the learning rate very low, and train the whole model until the results are satisfactory.
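As a minimal sketch of the same recipe outside the Platform, assuming the Hugging Face transformers library, a hypothetical 3-class target, and two toy training sentences, full fine-tuning with a very low learning rate could look like this:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
# Encoder plus classification head; every parameter is trainable by default,
# which mirrors setting the Multilingual BERT encoder block as Trainable.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3)  # hypothetical 3-class target

# Very low learning rate, as recommended for BERT fine-tuning
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

texts = ["Un exemple en français.", "Ein Beispiel auf Deutsch."]  # toy data
labels = torch.tensor([0, 1])

model.train()
for epoch in range(3):  # train for only a few epochs
    batch = tokenizer(texts, return_tensors="pt", padding=True,
                      truncation=True, max_length=128)
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```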
Memory consumption of BERT
The Multilingual BERT encoder is a very large model, which requires a large amount of memory to train.
The memory consumption estimate displayed when using a BERT model is unfortunately not accurate at the moment.
As a rule of thumb, keep the product Batch size * Sequence length lower than 3000 to avoid memory issues.
If an experiment fails because the model requires too much memory, try reducing the Batch size in the experiment’s settings.
You can consider reducing the Sequence length of the input feature as well, as long as this doesn’t remove significant information.
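For example, the rule of thumb can be checked with a one-line helper (purely illustrative, not a Platform function):

```python
def within_memory_rule_of_thumb(batch_size: int, sequence_length: int) -> bool:
    """Rule of thumb from above: keep Batch size * Sequence length below 3000."""
    return batch_size * sequence_length < 3000

within_memory_rule_of_thumb(10, 100)  # True:  10 * 100 = 1000
within_memory_rule_of_thumb(32, 512)  # False: 32 * 512 = 16384, reduce the batch size
```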
Fine-tuning a BERT model
BERT is also a powerful model that can learn most fine-tuning datasets very easily. This means that, when trained with inappropriate settings, it is prone to overfitting the new dataset and to catastrophic forgetting of what it learned during pre-training.
To avoid these issues, train your model with a very low Learning rate, of the order of 10⁻⁵ to 10⁻⁶.
In addition, only train for a few Epochs, between 1 and 3.
Recommended settings:

- Batch size: 6 when Sequence length is 512, 10 when Sequence length is 100.
- Epochs: 3 or less.
- Learning rate: 0.00001 or lower.
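If you keep track of experiment settings outside the Platform, the recommendations above could be recorded as a simple lookup; the names below are illustrative, not a Platform API:

```python
# Recommended starting points from this section (illustrative names only)
RECOMMENDED_BERT_SETTINGS = {
    "batch_size_by_sequence_length": {512: 6, 100: 10},
    "max_epochs": 3,        # 3 or fewer
    "learning_rate": 1e-5,  # 0.00001 or lower
}
```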
Available weights
The Multilingual BERT encoder block of this snippet uses the BERT-Base, Multilingual Cased weights, pre-trained by the Google AI Language Team on Wikipedia.
Terms
When using pre-trained snippets, additional terms apply: BERT with weights licence.
Reference
Jacob Devlin et al.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019.