Microsoft Corporation
EFFICIENT TRANSFORMER LANGUAGE MODELS WITH DISENTANGLED ATTENTION AND MULTI-STEP DECODING

Abstract:

Systems and methods are provided for facilitating the building and use of natural language understanding models. The systems and methods identify a plurality of tokens and use a transformer to generate one or more pre-trained natural language models from them. The transformer disentangles the content embedding and the positional embedding in the computation of its attention matrix. Systems and methods are also provided to facilitate self-training of the pre-trained natural language models by utilizing multi-step decoding to better reconstruct masked tokens and improve pre-training convergence.
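The disentangled attention described in the abstract matches the mechanism popularized by DeBERTa: the attention score is the sum of content-to-content, content-to-position, and position-to-content terms computed from separate content and relative-position projections. Below is a minimal single-head sketch in PyTorch, assuming relative positions clipped to a window of size k and the sqrt(3d) scaling from the public DeBERTa description; all function and variable names are illustrative and are not drawn from the patent claims.

    import torch
    import torch.nn.functional as F

    def relative_positions(seq_len, k):
        # delta(i, j) = i - j, clipped to [-k, k - 1] and shifted to
        # [0, 2k - 1] so it can index a relative-position embedding table.
        pos = torch.arange(seq_len)
        delta = pos[:, None] - pos[None, :]
        return delta.clamp(-k, k - 1) + k                # (seq, seq)

    def disentangled_attention(content, rel_embed, Wq_c, Wk_c, Wq_r, Wk_r, k):
        # content:   (seq, d) token-content embeddings
        # rel_embed: (2k, d)  relative-position embeddings
        seq_len, d = content.shape
        rel_idx = relative_positions(seq_len, k)         # (seq, seq)

        Qc, Kc = content @ Wq_c, content @ Wk_c          # content projections
        Qr, Kr = rel_embed @ Wq_r, rel_embed @ Wk_r      # position projections

        c2c = Qc @ Kc.T                                  # content-to-content
        c2p = (Qc @ Kr.T).gather(1, rel_idx)             # content-to-position
        p2c = (Kc @ Qr.T).gather(1, rel_idx).T           # position-to-content

        scores = (c2c + c2p + p2c) / (3 * d) ** 0.5      # scale by sqrt(3d)
        return F.softmax(scores, dim=-1)

The sqrt(3d) scale reflects the three score terms being summed rather than the single dot product of standard attention. The multi-step decoding mentioned for reconstructing masked tokens can be read as iterative refinement: each pass re-predicts the masked positions conditioned on the previous pass's predictions. A hedged sketch, assuming a hypothetical `model` callable that maps token ids to per-position vocabulary logits and a boolean `mask_positions` tensor:

    def multi_step_decode(model, ids, mask_positions, steps=2):
        # Refine masked positions over several passes; later passes see
        # the earlier passes' predictions instead of the mask token.
        out = ids.clone()
        for _ in range(steps):
            logits = model(out)                          # (seq, vocab)
            out[mask_positions] = logits[mask_positions].argmax(dim=-1)
        return out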

Status:

Application

Type:

Utility

Filing date:

24 Jun 2020

Issue date:

28 Oct 2021