The tokenizer block converts plain text into a sequence of numerical values that language models can process. Thanks to the WordPiece method, the same block can handle text written in over 100 languages.
How does text tokenization work?
The tokenizer splits the input text into small pieces, called tokens.
There can be more tokens than words, because a word may be split into subwords (such as prefixes and suffixes) when those parts are more common than the whole word.
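The subword splitting described above can be sketched as a greedy longest-match against a vocabulary. The snippet below is a minimal illustration, not the production tokenizer: the vocabulary here is a tiny hypothetical one (a real WordPiece vocabulary has roughly 30,000 entries), and continuation pieces are marked with the `##` prefix as in BERT's convention.

```python
# Toy WordPiece-style tokenizer: greedy longest-match splitting.
# TOY_VOCAB is an illustrative stand-in for a real ~30k-entry vocabulary.
TOY_VOCAB = {"[UNK]": 0, "[PAD]": 1, "play": 2, "##ing": 3,
             "##ed": 4, "token": 5, "##izer": 6}

def wordpiece(word, vocab):
    """Split one word into subword tokens by greedy longest-match."""
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:
            candidate = word[start:end]
            if start > 0:
                candidate = "##" + candidate  # continuation pieces are prefixed
            if candidate in vocab:
                piece = candidate
                break
            end -= 1  # shrink the candidate until it matches
        if piece is None:
            return ["[UNK]"]  # no known subword covers this position
        tokens.append(piece)
        start = end
    return tokens

def tokenize(text, vocab):
    """Tokenize whitespace-separated text and map tokens to integer ids."""
    tokens = []
    for word in text.lower().split():
        tokens.extend(wordpiece(word, vocab))
    return tokens, [vocab[t] for t in tokens]
```

With this toy vocabulary, `tokenize("playing tokenizer", TOY_VOCAB)` splits each word into two pieces, giving more tokens than words, exactly the situation described above.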
The Sequence length is enforced by truncating or padding the sequence of tokens.
Each token is mapped to an integer value from the vocabulary, and the resulting sequence of integers is ready to be processed by one of the language processing blocks.
Sequence length: The total number of tokens kept in the sequence. The sequence length must be fixed, since models require fixed-size inputs.
If the text input is longer than the Sequence length, the end of the text will be ignored.
If the text input is shorter, the sequence will be padded with padding tokens.
Choose a length that matches your typical text size, so that all the data is used while avoiding unnecessary computation on padding tokens.
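The truncate-or-pad rule above can be sketched in a few lines. This is an illustrative helper, not the block's actual implementation; the padding id of 0 is an assumption (many BERT vocabularies assign 0 to the `[PAD]` token).

```python
def fit_to_length(token_ids, seq_len, pad_id=0):
    """Force a list of token ids to exactly seq_len tokens.

    Longer inputs are truncated (the end of the text is ignored);
    shorter inputs are padded with pad_id (assumed 0 here for [PAD]).
    """
    if len(token_ids) >= seq_len:
        return token_ids[:seq_len]
    return token_ids + [pad_id] * (seq_len - len(token_ids))
```

For example, with a Sequence length of 5, a 3-token input gains two padding ids at the end, while a 6-token input loses its last token.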
Vocabulary: The known vocabulary used to tokenize the text and assign numerical values.
Use English uncased if you connect the tokenizer block to an English BERT encoder block. Letter case (capitalization) in the text is ignored.
Use Multilingual cased if you connect the tokenizer block to a Multilingual BERT encoder block. Letter case (capitalization) is preserved, which provides additional linguistic information: the same word written with different capitalization gets different token values.