
NLP Modeling Library

This library provides a set of Keras primitives (tf.keras.layers.Layer and tf.keras.Model subclasses) that can be assembled into transformer-based models. The primitives are flexible, validated, interoperable, and compatible with both TF1 and TF2.
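
As a rough illustration of the style of building blocks involved, the sketch below assembles a single transformer block from stock tf.keras layers only. It is not the library's own layer API (the library's primitives add masking, validation, and many more options); it is just a minimal, self-contained example of composing Keras primitives into a transformer component.

```python
import tensorflow as tf

# A minimal transformer block built from stock Keras layers, for illustration
# only; the library's own layer classes provide a richer, validated API.
class SimpleTransformerBlock(tf.keras.layers.Layer):
  def __init__(self, hidden_size=256, num_heads=4, inner_dim=1024, **kwargs):
    super().__init__(**kwargs)
    self.attention = tf.keras.layers.MultiHeadAttention(
        num_heads=num_heads, key_dim=hidden_size // num_heads)
    self.attention_norm = tf.keras.layers.LayerNormalization()
    self.inner_dense = tf.keras.layers.Dense(inner_dim, activation="gelu")
    self.output_dense = tf.keras.layers.Dense(hidden_size)
    self.output_norm = tf.keras.layers.LayerNormalization()

  def call(self, inputs):
    # Self-attention sub-layer with a residual connection.
    attention_output = self.attention(inputs, inputs)
    attention_output = self.attention_norm(inputs + attention_output)
    # Position-wise feed-forward sub-layer with a residual connection.
    layer_output = self.output_dense(self.inner_dense(attention_output))
    return self.output_norm(attention_output + layer_output)

block = SimpleTransformerBlock()
hidden_states = tf.random.uniform([2, 16, 256])  # [batch, seq_len, hidden]
print(block(hidden_states).shape)  # (2, 16, 256)
```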

Please see the colab NLP modeling library intro.ipynb for how to build transformer-based NLP models using the above primitives.
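
As a hedged sketch of what that colab walks through, the snippet below builds a small BERT-style encoder and wraps it in a classifier. The import path and the constructor names and arguments used here (modeling.networks.BertEncoder, modeling.models.BertClassifier, and their keyword arguments) are assumptions; treat the colab as the authoritative reference.

```python
import tensorflow as tf
from official.nlp import modeling  # assumed import path; see the intro colab

# Assumed API: networks.BertEncoder / models.BertClassifier. Verify the class
# names and constructor arguments against NLP modeling library intro.ipynb.
encoder = modeling.networks.BertEncoder(
    vocab_size=30522, hidden_size=256, num_layers=4, num_attention_heads=4)

classifier = modeling.models.BertClassifier(network=encoder, num_classes=2)

# BERT-style inputs: word ids, an input mask, and segment (type) ids.
batch, seq_len = 2, 16
inputs = [
    tf.zeros([batch, seq_len], dtype=tf.int32),  # input_word_ids
    tf.ones([batch, seq_len], dtype=tf.int32),   # input_mask
    tf.zeros([batch, seq_len], dtype=tf.int32),  # input_type_ids
]
logits = classifier(inputs)
print(logits.shape)  # (batch, num_classes)
```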

Besides the pre-defined primitives, the library also provides scaffold classes to allow easy experimentation with novel architectures; for example, you don't need to fork a whole Transformer object to try a different kind of attention primitive.

Please see the colab customize_encoder.ipynb for how to use the scaffold classes to build novel architectures.
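
The sketch below illustrates the scaffold pattern in plain Keras rather than with the library's actual scaffold classes (those, and their exact constructor arguments, are covered in customize_encoder.ipynb): the attention layer is injected as a constructor argument, so trying a new attention variant does not require forking the whole block.

```python
import tensorflow as tf

# Plain-Keras illustration of the scaffold pattern, not the library's API:
# the attention primitive is pluggable instead of hard-coded in the block.
class ScaffoldedBlock(tf.keras.layers.Layer):
  def __init__(self, attention_layer, hidden_size=256, inner_dim=1024, **kwargs):
    super().__init__(**kwargs)
    self.attention = attention_layer  # pluggable attention primitive
    self.attention_norm = tf.keras.layers.LayerNormalization()
    self.inner_dense = tf.keras.layers.Dense(inner_dim, activation="gelu")
    self.output_dense = tf.keras.layers.Dense(hidden_size)
    self.output_norm = tf.keras.layers.LayerNormalization()

  def call(self, inputs):
    attention_output = self.attention_norm(inputs + self.attention(inputs, inputs))
    layer_output = self.output_dense(self.inner_dense(attention_output))
    return self.output_norm(attention_output + layer_output)

# Swap in any attention implementation with the same call signature.
custom_attention = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=64)
block = ScaffoldedBlock(custom_attention)
print(block(tf.random.uniform([2, 16, 256])).shape)  # (2, 16, 256)
```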

The BERT and ALBERT models in this repository are implemented using this library; code examples can be found in the corresponding model folders.