The Embedding block represents a layer that turns a categorical variable, encoded as an integer (each integer value corresponds to a category), into a vector of real numbers.
It can be used as an alternative to one-hot (categorical) encoding; since the vectors are trainable, we can expect categories that have a similar effect on the model to be mapped to similar vectors over time.
The name comes from the fact that this process creates an embedding space for the categorical variable, in which we can observe similarities and differences between category labels.
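Under the hood, an embedding layer is essentially a trainable lookup table: a weight matrix with one row per category. The sketch below is illustrative only (the names, sizes, and random initialization are assumptions, not part of the block's API):

```python
import numpy as np

vocab_size = 5      # number of distinct integer categories
embedding_size = 3  # dimension of the embedding space

# One trainable row per category, randomly initialized.
rng = np.random.default_rng(0)
weights = rng.normal(size=(vocab_size, embedding_size))

# "Embedding" an integer label is just a row lookup.
category = 2
vector = weights[category]      # shape: (3,)

# A batch of labels maps to a batch of vectors.
batch = np.array([0, 2, 2, 4])
vectors = weights[batch]        # shape: (4, 3)
```

During training, only the rows that were looked up receive gradient updates, which is what lets similar categories drift toward similar vectors.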
Embedding layers can only be used immediately after an Input block.
Input vocabulary size: The number of possible integer values, i.e., the number of unique values of the categorical variable.
Vocabulary size = number of classes = number of entries in the vocabulary.
Embedding size: The dimension of the embedding space, i.e., the length of each embedding vector.
Trainable: Whether the training algorithm may update the layer's weights during training. In some cases, one wants to keep parts of the network frozen, for instance when reusing the encoder of a pretrained autoencoder as a preprocessing stage for another model.
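To make the Trainable option concrete, here is a hedged sketch of a single gradient step in which a frozen embedding table is simply excluded from the update (the loss, learning rate, and variable names are illustrative assumptions, not the tool's actual training loop):

```python
import numpy as np

vocab_size, embedding_size = 4, 2
rng = np.random.default_rng(1)
weights = rng.normal(size=(vocab_size, embedding_size))
initial = weights.copy()

trainable = False  # corresponds to the block's Trainable option

# Toy objective: pull the embedding of category 1 toward zero,
# i.e., minimize ||weights[1]||^2.
category = 1
grad = np.zeros_like(weights)
grad[category] = 2 * weights[category]

learning_rate = 0.1
if trainable:
    weights -= learning_rate * grad  # updated only when trainable

# With trainable=False, the lookup table is left untouched.
```

Flipping `trainable` to `True` would move only the looked-up row, which is exactly the behavior a frozen pretrained encoder avoids.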