Intuit Inc.
METHOD FOR SERVING PARAMETER EFFICIENT NLP MODELS THROUGH ADAPTIVE ARCHITECTURES

Last updated:

Abstract:

A machine learning (ML) system executed by a processor may generate predictions for a variety of natural language processing (NLP) tasks. The ML system may include a single deployment implementing a parameter-efficient transfer learning architecture. The ML system may use adapter layers to dynamically modify a base model to generate a plurality of fine-tuned models, each of which may generate predictions for a specific NLP task. By transferring knowledge from the base model to each fine-tuned model, the ML system significantly reduces the number of tunable parameters required to generate a fine-tuned NLP model and decreases the fine-tuned model artifact size. Additionally, the ML system reduces training times for fine-tuned NLP models, promotes transfer learning across NLP tasks with low labeled-data volumes, and enables simpler, more computationally efficient deployments for multi-task NLP.
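The adapter layers described above can be sketched as a small bottleneck module inserted into a frozen base model: a down-projection, a nonlinearity, an up-projection, and a residual connection, so only the two small projection matrices are tuned per task. The sketch below is illustrative, not the patented implementation; the hidden size, bottleneck width, and zero-initialization choice are assumptions.

```python
import numpy as np

# Assumed dimensions: base-model hidden size d, adapter bottleneck width r.
d, r = 768, 64

rng = np.random.default_rng(0)
W_down = rng.standard_normal((d, r)) * 0.01  # down-projection (tuned per task)
W_up = np.zeros((r, d))                      # up-projection, zero-initialized so the
                                             # untrained adapter is the identity map

def adapter(h, W_down, W_up):
    """Bottleneck adapter: project down, apply ReLU, project up, add residual."""
    z = np.maximum(0.0, h @ W_down)
    return h + z @ W_up

h = rng.standard_normal((1, d))   # a hidden state from the frozen base model
out = adapter(h, W_down, W_up)

adapter_params = W_down.size + W_up.size  # parameters tuned per NLP task
full_layer_params = d * d                 # parameters in one full dense layer
print(out.shape, adapter_params, full_layer_params)
```

For these dimensions the adapter holds 2·768·64 = 98,304 tunable parameters versus 589,824 for a single full dense layer, which illustrates how per-task artifact size and training cost shrink when only adapters are fine-tuned.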

Status:

Application

Type:

Utility

Filing date:

2 Jan 2020

Issue date:

8 Jul 2021